paperId (string, len 40) | title (string, len 6-300) | year (int64, 2.01k-2.02k) | publicationTypes (sequence, len 1-4) | Abstract (string, len 3-11.9k) | All_Text_with_Titles (string, len 21-1.17M) | len_text (int64, 21-1.17M) | len_abstract (int64, 3-11.9k)
---|---|---|---|---|---|---|---|
cfef4867af691062f10da5f6ec4533350c4dbb30 | SOCIAL CAPITALS AMONG KELANTAN PERANAKAN CHINESE MUSLIMS IN MALAYSIA | 2023 | ["JournalArticle", "Review"] | Background and Purpose: Typically, Chinese Muslims experience relationship conflict with their non-Muslim family (bonding social capital) and the Malay community (bridging social capital) after converting to Islam. This conflict affects their social capital. The main aim of this study was to identify the bonding and bridging social capitals of the Kelantan Peranakan Chinese Muslim community in Kelantan, Malaysia in the aspects of trust, reciprocity, and cohesion. Methodology: This descriptive study was conducted using a sequential explanatory mixed-method approach involving Chinese Muslims in the state of Kelantan. A total of 75 respondents participated in the quantitative study, and five of them were involved in the qualitative study. The sampling methods used were purposive and snowball sampling. The quantitative data were collected through a survey questionnaire, while the qualitative data were gathered through semi-structured interviews. The findings revealed that the reciprocal and cohesive elements mostly occurred with bridging social capital only. As for the trust aspect, the respondents indicated that they trust their bonding and bridging social capitals only on an occasional basis. It was also found that relationship conflict existed between Chinese Muslims after conversion and both their family members who had not converted to Islam and the Malay community. |
In this study, quantitative data were gathered through a survey form, and qualitative data were collected through interviews. A survey form allows researchers to organise questions and receive feedback without having to communicate verbally with each respondent (Williams, 2006). In this study, the questionnaire was designed by the researcher based on literature reviews and in-depth interviews. In developing the study questionnaire, the researcher examined the related literature in order to form an operational definition for each variable. These definitions formed the basis for developing a proper survey to be used in this study. This is in line with Yan (2011), who stated that operationalised variables have accurate quantitative measurements. Meanwhile, according to Sabitha (2006), a questionnaire that presents clear variable definitions has undeniably strong validity. This point is also in line with Robbins (2008), who stated that a questionnaire constructed based on literature reviews would comply with validity and reliability requirements.
Next, the researcher also conducted a series of in-depth interviews. According to Oppenheim (1998), in-depth interviews can assist researchers in constructing survey questions more accurately. Therefore, in this study, the researcher was able to check the information obtained from the literature reviews against the actual realities of Chinese Muslims' lives.
A total of four Chinese Muslims were interviewed using open-ended questions. Each informant was asked in an open manner based on the operationalised definitions. After the interview sessions, the data were analysed and the analysis results were used to guide the development of the survey items in the questionnaire.
The questionnaire constructed in this study achieved the required content and face validity. Content validity refers to the extent to which the measurement of a variable represents what it should be measuring (Yan, 2011). In this study, conceptual and operational definitions were derived from the literature reviews in order to obtain content validity of the questionnaire. This is supported by Muijs (2004), who stated that content validity can be obtained through the review of past literature. Through this validity, the researcher confirmed that the study variables were able to measure their actual concepts. As stated by Muijs (2004), content validity can be used to represent a measurable latent concept.
Face validity of the study questionnaire was also established by asking several respondents from the Chinese Muslim community through an informal survey.
According to Muijs (2004), face validity of a questionnaire can be confirmed by asking the study respondents whether, in their view, the questions are relevant. This guided the researcher in measuring each of the studied aspects based on the realities of the respondents' lives. Besides, the questions also helped the researcher ascertain that the constructed questionnaire met its intended outcomes. As stated by Ary et al. (2010), face validity means that the researchers believe their questionnaire has measured what it should measure. In addition, a pilot study was also conducted in order to confirm the reliability of the questionnaire. A pilot study can be used to ensure the stability and consistency of a constructed questionnaire in measuring certain concepts, and at the same time to evaluate whether the questionnaire is properly developed.
As for the qualitative data, this study used the interview method to address the study objectives, with the purpose of elaborating on the quantitative findings. This was due to the use of a sequential explanatory design. According to McMillan (2012), a sequential explanatory design requires the quantitative data to be further explained, elaborated, and clarified.
Quantitative data obtained from the survey were analysed using two types of statistical procedures, namely descriptive and inferential statistics. All data were processed using the SPSS software. As for the analysis of the qualitative data obtained from the interviews, a manual method was used whereby the data were analysed through Open Coding, Clustering, Category, and Thematic processes. According to Tiawa, Hafidz, and Sumarni (2012), open coding is the assignment of a code to each piece of data so that it can be classified according to the study objectives.
Meanwhile, clustering is the process of classifying data which have been assigned open codes into specific categories. Next, the category process aims to facilitate researchers in dividing the data according to sections of the study. Thematic analysis is the process of classifying each piece of gathered data based on more specific themes or concepts.
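As an illustration of the kind of descriptive analysis described above, the following minimal Python sketch shows how Likert-type survey items could be summarised into mean frequency scores and low/moderate/high levels. The item names, response values, and cut-off points are hypothetical and are not the authors' actual SPSS procedure or instrument.

```python
# Illustrative only: hypothetical items, responses, and level cut-offs.
import pandas as pd

# Hypothetical 5-point frequency responses (1 = never ... 5 = always) for three trust items.
survey = pd.DataFrame({
    "trust_share_problems": [3, 4, 2, 3, 5],
    "trust_cooperation":    [3, 3, 4, 2, 4],
    "trust_lend_money":     [2, 2, 3, 1, 3],
})

# Descriptive statistics: mean frequency score per item.
print(survey.mean().round(1))

# Composite score per respondent, classified into low / moderate / high levels.
composite = survey.mean(axis=1)
levels = pd.cut(composite, bins=[0, 2.33, 3.66, 5], labels=["low", "moderate", "high"])
print(levels.value_counts(normalize=True).mul(100).round(0))
```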
---
ANALYSIS AND DISCUSSION
---
Demographic Profile
Table 1 shows the background of the study respondents in terms of age, gender, and educational level. With regard to age, the majority of respondents (32%) were 46 to 55 years old. This is followed by those aged 56 years and above (27%) and then those aged 36 to 45 years (23%). Meanwhile, only a few were aged 26 to 35 years and 16 to 25 years (9% each). Thus, the younger groups were smaller than the older ones. This was because the study respondents were selected from among those involved in the official activities organised by MAIK and MACMA Kelantan. This situation is also in line with Mohd Azmi and Maimunah (2003), who stated that most of the new Muslim converts who attend guidance classes are adults.
In terms of gender, the number of female respondents exceeded the number of male respondents by ten percentage points (i.e., 55% females and 45% males). This indicates a nearly equal distribution in the involvement of male and female Muslim converts in formal activities and guidance classes. Although there were reportedly more females than males among new Muslim converts (Mohd Azmi & Maimunah, 2003), this study showed that males' participation in formal activities and guidance classes was not affected by this situation.
As for the educational background, all respondents received formal education, with the majority (64%) reaching secondary school level, followed by primary school level (23%). Meanwhile, only 13 percent of them attained higher education, of whom 9 percent received university education and the remaining 4 percent received college education. These data reveal that most respondents in this study received formal education only up to school level.
With regard to the reason for conversion, more than half (55%) of the respondents converted to Islam through their interaction with the local Malay community. Meanwhile, 20 percent of them said that they were attracted to Islam. Other reasons for embracing Islam were the marriage factor (16%), research and reading (7%), and following or being influenced by a spouse (3%).
Interaction being the main factor for non-Muslims to embrace Islam is probably due to the high sociability within the multi-cultural community, especially the Kelantan Chinese (Mohd Shahrul Imran Lim, 2014). In fact, the interaction among the Kelantan Chinese indicates a high level of assimilation in the way of life of this community group (Teo, 2005). Indirectly, this situation has led the Kelantan Chinese community to accept Islam. The result in this study is in line with Azarudin and Khadijah (2015), who stated that interaction with the Muslim community is the main factor in conversion to Islam among the Chinese community in the state of Terengganu.
In addition, the results show that the original religion of the majority of respondents (87%) before converting to Islam was Buddhism. A total of 7 percent were initially Christians, 5 percent were Confucian, and the remaining 1 percent were atheists. Thus, almost all respondents were originally Buddhists. These results are in line with previous studies which reported that the Kelantan Peranakan Chinese community still maintain the religious belief of their ancestors, namely Theravada Buddhism (Teo, 2008; Mohd Roslan & Haryati, 2011; Khoo, 2010).
---
Trust Attitude
The frequency of the trust element was measured to indicate how frequently respondents trust their bonding social capital, i.e., their original family who have not converted to Islam, and their bridging social capital, i.e., the Malay community, in social, religious, and financial aspects. Figure 1 shows the level of trust in bonding social capital. A total of 47 percent of respondents were at the low level of trust in bonding social capital, 32 percent were at the moderate level, and 21 percent were at the high level. Meanwhile, Figure 2 indicates the level of trust in bridging social capital. Only 9 percent of respondents had a low level of trust, while 49 percent were at the moderate level and 41 percent were at the high level. In conclusion, the majority of respondents had a low level of trust in bonding social capital (i.e.,
their original family who have not converted to Islam). Bridging social capital (the Malay community) obtained a higher level of trust from respondents, with almost all distributed at the moderate and high levels. Table 2 displays the frequency of bonding and bridging social capitals for the trust element.
Overall, respondents seemed to only occasionally trust their bonding social capital (2.6) and bridging social capital (3.4). Respondents sometimes trust their bonding social capital in the aspects of practicing religion (3.2), giving cooperation (2.9), and sharing problems with them (2.7). Nevertheless, as for bridging social capital, respondents occasionally build their trust in different aspects, which are sharing problems (3.4), receiving financial support when needed
(2.6), and lending money (2.5). However, in other aspects, respondents indicated that they frequently trust their bridging social capital in giving cooperation (3.9), speaking about religion (3.6), and practicing religion together (4.1). Nevertheless, some respondents also showed that they seldom trust their bonding social capital, particularly in religious and financial aspects.
Specifically, respondents seldom trust their bonding social capital to talk about religion (2.3) or to obtain money when needed (2.3). According to Informant 1, it was quite difficult to trust one's own family due to the religious differences between them. It is of primary concern that such differences might give a bad impression of the religion being practised; therefore, trust is not placed in the family, whether in social, religious, or financial aspects. Furthermore, Informant 1 also provided a statement regarding the frequency of trust in bridging social capital, which indicates that Chinese Muslims frequently trust the Malay community because of their concern towards them. However, regarding trust in the financial aspect, the informants said that Chinese Muslims do not place high trust in Malays. This is as stated by Informant 3 below: "When it comes to money, it's a bit hard.. it's about certain Malays who are reluctant to pay back. And then, before converted to Islam, there was also a perception among the Chinese who said that it's hard for Malays to pay money (debt). They are reluctant to pay.. like my grandmother who sells living chicken.. she let them took items first, but they're behind payments until now.. maybe my grandmother told others that it's difficult to deal with Malays.. and then when I was still a kid.. I always heard Malay people said, it's okay to not settle your debt to Chinese .. they're kafir (non-believers)."
Informant 3 said that it is quite difficult for Chinese Muslims to trust Malays in the financial aspect due to the perception nurtured in them before they converted to Islam; they were even exposed to that mindset as children. In conclusion, respondents' trust towards bonding social capital was low and occurred only on an occasional basis. A similar result was observed for bridging social capital, whereby the frequency of trust was also occasional, yet the level of trust in bridging social capital was found to be good. Besides, other statements also indicate that respondents frequently trust their bridging social capital in social and religious aspects. This shows that bridging social capital, i.e., the Malay community, receives better trust from respondents compared to bonding social capital, i.e., their original family who have not converted to Islam.
The occasional occurrence and moderate level of trust suggest that the Islamisation of an individual within a certain bonding social capital has caused a lack of trust in that bonding social capital. This implies that Islamisation has changed the trust attitude due to the difference in religious values that were previously shared. The relationship was seen to be limited because of such differences in values. This is similar to the view of Brennan and Barnett (2009), who stated that a relationship can be retained because of the common values between connected individuals. A limited relationship restricts the interaction among bonding social capitals, whereas the development of trust depends on interaction (Amir Zal, 2016). Furthermore, according to Payne (2006), interaction manifests that trust has taken place. It was observed in this study that the failure to maintain interaction between respondents and their bonding social capital happened because Chinese Muslim individuals feared negative views from their bonding social capital towards their newly embraced religion. This is in contrast to other views stating that the interaction of Chinese Muslims was affected when their family rejected their newly embraced religion (Fariza, 2009; Suraya et al., 2013; Marlon et al., 2014). Therefore, it can be concluded that the low level of trust among respondents towards the family was due to the religious difference, which made them afraid to connect with bonding social capital, and was not related to the conflicts with bonding social capital faced by Chinese Muslims in other states in this country.
Although the trust element towards bridging social capital occurs on an occasional basis, respondents indicated frequent trust in bridging social capital in social and religious aspects. This is because, even before converting to Islam, the Chinese Muslim community in Kelantan generally had a high level of sociability with the Malay community, which makes it easier for them to respond to changes in their current environment, such as clothing, food, and leisure activities that are similar to those of the Malay community (Mohd Shahrul Imran Lim, 2014; Pue & Charanjit, 2014). Furthermore, according to Hanapi (1986), the assimilation of the Kelantan Chinese has transformed their social and household organisation towards Malay ways to the extent of crossing religious boundaries, whereby the Chinese community in Kelantan even invite Malay Muslim spiritual leaders to perform prayers when entering a new home. Meanwhile, according to Pue and Charanjit (2014), there is no hindrance for Kelantan people of Chinese descent (peranakan) to adopt other religious elements if these are believed to be of benefit to them. Nevertheless, these factors did not lead respondents to trust their bridging social capital for problem sharing or financial matters. This indicates that closeness, assimilation, and Islamisation do not make it easy for them to share problems or obtain financial resources through their bridging social capital. The same applies to lending money to bridging social capital.
---
Reciprocity
The reciprocal element refers to the mutuality that occurs between respondents' bonding and bridging social capitals in social, religious, and financial aspects. Figure 3 illustrates the level of respondents' reciprocity with bonding social capital. The study findings indicate that 47 percent of respondents' reciprocity with bonding social capital were at the moderate level, 31 percent were at the low level, and the other 23 percent were at the high level. Meanwhile,
Figure 4 shows the level of reciprocity for bridging social capital. 52 percent of respondents indicated the high level, 37 percent were at the moderate level, and only 11 percent of them were at the low level. Overall, it can be seen that there was a higher level of respondents' reciprocity with bridging social capital, whereas the level of respondents' reciprocity with bonding social capital was moderate and some of them were even noted at the low level. Table 3 indicates results pertaining to the frequency of respondents' reciprocity with bonding and bridging social capitals. Generally, based on the study findings, reciprocity occurred on occasional basis for bonding social capital (2.7) and frequently for bridging social capital (3.5).
In terms of bonding social capital, reciprocity occurs occasionally in the aspects of mutual respect for religious belief (3.1), visiting each other (3.1), and helping one another (3.0). The data also show that respondents seldom (2.4) help each other in the financial aspect. Other than that, reciprocity also rarely happened in the aspects of exchanging religious opinions (2.3) as well as borrowing and lending money (2.1). As for bridging social capital, the study findings showed that respondents were often reciprocal in the aspects of mutual respect for religious belief (4.0), visiting each other (3.8), helping one another (3.8), and exchanging religious opinions (3.7). However, reciprocity seldom occurred in the aspects of giving financial support to each other (2.9) and, likewise, borrowing money from each other (2.7).
The frequencies of reciprocity discussed above were also supported by findings obtained from the interviews with respondents. In terms of bonding social capital, Informant 2 mentioned that: "It's difficult to help each other.. or anything.. How to help others when we don't even have enough to eat? Then, my mother wanted to come to help. But, we seldom meet.. So, we don't ask for help.. just on our own. After all, we already have our own family."
According to Informant 2, it is difficult for reciprocity to occur because the respondents are also living a hard life, and their family seldom come to help them because they rarely meet. Due to this, reciprocity only happened occasionally. In addition, the informant also stated that they do not ask for help from family and manage everything by themselves, especially as they already have their own Muslim family. This shows that reciprocity with bonding social capital occurs among respondents at a moderate level and only on an occasional basis.
As for bridging social capital, the following statement was obtained from the interview with Informant 2: "I have asked for rice from Malays. We didn't have any rice to cook.. we didn't borrow. We asked for one or two cups. The Malay people gave us. If we have some rice, we also gave one or two cups to them. There is no problem with Malays. If they are doing hard, we help them as much as we could. It's because we live together" (Informant 2) Informant 2 mentioned that the Malay community always help them when they are out of rice, and likewise, the informant also helps to the possible extent the Malay community who are in need. This is because they are living together in the social environment of the Malay community.
Thus, it can be implied that respondents' reciprocity with bonding social capital occurred less frequently than that established with bridging social capital, which was more frequent. At the same time, reciprocity related to the financial aspect happened only on an occasional basis for both social capitals. The occasional occurrence and moderate level of reciprocity with bonding social capital indicate that respondents' interdependency with bonding social capital has decreased after they converted to Islam. This is considering that, according to Aeby, Widmer, and Carlo (2014), the family is a resource of social capital that involves mutually beneficial relationships, as well as information and emotional support. This is in contrast to the reality of respondents before embracing Islam. Hanapi (2007) stated that interdependence within the family is a characteristic of the Chinese in Kelantan, where they help and respect each other and thus have close relationships. However, the qualitative findings indicated that respondents have been living in hardship after converting to Islam, suggesting that reciprocity did not occur among them. Therefore, this community no longer has a strong bond (family) to support its members, even though, as stated by Schmid (2000), the role of bonding social capital is to support community members. Meanwhile, the reciprocal element with bridging social capital occurred frequently and at a higher level. This is in line with Azarudin (2015), who stated that the tolerance values between the Chinese Muslim and Malay communities, such as in doing daily activities, visiting each other, exchanging food, helping one another, and so on, imply a mutually supportive relationship. Nevertheless, this level of reciprocity did not extend to the financial aspect. This is notable considering that the purpose of bridging social capital is not only to obtain social needs but also economic ones (Grafton & Knowles, 2004). This finding suggests that Islamisation allows respondents to work cooperatively to obtain social and religious benefits from bridging social capital, but not financial ones.
---
Cohesion
The cohesive element indicates respondents' feeling that they are being accepted, belonged to, and loved by both bonding social capital (i.e., family members who are not converted to Islam) and bridging social capital (i.e., the Malay community). Figure 5 shows the cohesive level of bonding social capital. The study findings reveal that 37 percent of respondents indicated the moderate level, 32 percent were at the low level, and 31 percent were at the high level.
Meanwhile, Figure 6 illustrates the cohesive level of bridging social capital. Based on the results, 52 percent of respondents were at the high level, 44 percent were at the moderate level, and only four percent were at the low level. In terms of cohesion with bonding social capital, respondents were distributed almost equally across the levels. This differs from the case of bridging social capital, in which the majority of respondents indicated a high level of cohesion and only a few had a low level. Table 4 shows results pertaining to the frequencies of cohesion with bonding and bridging social capitals. As a whole, cohesion with bonding social capital happened only occasionally (2.8), whereby respondents at times feel in agreement (3.3), friendly (3.3), that their religion is respected (3.3), and that they can talk about religion (2.5) with their bonding social capital. In addition, cohesion in the financial aspect rarely happened, whereby respondents seldom talk about finance (2.2) and rarely could borrow money easily (2.2) from their bonding social capital. This finding was noted by Informant 4, who stated that: "We're not close because we feel that they (family) shun us.. they feel that we shun them. That's why we've become not very close. Because it's different now, right... we don't have the same religion, that's why." (Informant 4).
According to Informant 4, cohesion happened not so frequently in bonding social capital due to the different perception between respondents and bonding social capital. The informant also mentioned that religious difference is the reason explaining the occasional occurrence of cohesion.
With regard to the frequency of cohesion in bridging social capital, Table 4 reveals that cohesion generally happened frequently (3.5). Specifically, respondents are often friendly (4.0) and in agreement (3.9) with their bridging social capital. Furthermore, the Islamic religion which they have embraced is frequently respected (3.9) and they often talk about the Islamic religion (3.9). However, when it comes to the financial aspect, cohesion only took place occasionally. Respondents can at times borrow money easily (2.7), and only occasionally could they talk about financial matters (2.9) with their bridging social capital. This finding indicates that cohesion between respondents and bridging social capital frequently occurs, except in the financial aspect. This point was also supported by informants, such as Informant 3. In conclusion, the cohesive element occurs more frequently between respondents and bridging social capital, compared to bonding social capital, where it occurs on an occasional basis. The occasional occurrence and moderate level of the cohesive element found in this study differ from the reality of the Chinese community. According to Lyndon, Wei, and Mohd Helmi (2014), the Chinese community has a close relationship with their family. In this study, respondents' conversion to Islam did not lead to the persistence of cohesion between respondents and bonding social capital. Based on the qualitative findings, respondents generally stated that there are different perceptions between respondents and their bonding social capital, whereby, in the relationship context, respondents felt that they are being shunned by the family, and likewise for their family. The lack of cohesion among respondents in this study contradicts the views of Amran (1985), Mohd Syukri Yeoh and Osman (2004), and Osman (2005), who stated that the Chinese community have more negative perceptions towards Chinese people who convert to Islam than towards those who convert to other religions. On the contrary, our study findings were only related to the family's perception that the Islamic converts no longer want to be a part of the family.
Furthermore, the transformation of Chinese Muslims' way of life to adapt to the way of living of the Malay community (Razaleigh et al., 2012) was also observed to be a factor causing the lack of cohesion with respondents' bonding social capital. This is because the Islamisation of Chinese Muslims is regarded as their becoming Malay, thus causing Chinese Muslims to abandon their life as Chinese. Another factor that negatively affects respondents' cohesive element is that religion has a disintegrative effect, in which its presence builds a subtle and thin boundary between those who have embraced a new religion and those who still hold to the inherited old world (Taufik, 2009). Therefore, respondents' Islamisation has shaped certain perceptions held by respondents and their bonding social capital, thus reducing the cohesion among them. The fact is that the cohesive element is important for community members to feel that they are accepted by and belong to the community, as well as to have 'a sense of own place' in the community (Dale & Sparkes, 2008).
Furthermore, the high level and frequent occurrence of respondents' cohesion with bridging social capital as noted in this study are not in line with Razaleigh et al. (2012) who found that Chinese people are socially less integrated with the Malay community after they embraced Islam. Similarly, Marlon et al. (2014) who stipulated that racial sectionalism is still taking place between the Chinese Muslim and Malay communities, reported that the levels of understanding, acceptance, and integration of Chinese Muslims towards the Malay culture are still moderate. This shows that there is a difference between Chinese Muslims in Kelantan and those in other states in this country in terms of the cohesive aspect.
---
CONCLUSION
The Chinese Muslim community in this study indicated that their social capital in the aspects of trust, reciprocity, and cohesion with family members who have not converted to Islam (i.e., bonding social capital) occurs only on an occasional basis after their conversion to Islam. A different result was observed for respondents' bridging social capital, i.e., the Malay community, whereby their social capital in the aspects of trust, reciprocity, and cohesion took place frequently and was mostly at the high level. Thus, it can be concluded that the potentials of a community, namely its social capitals, can be affected by the religious factor.
Other than that, bonding social capital could also have implications for maintaining the sustainability of the relationship and affect its ownership, because those elements have an impact on the interaction between respondents and bonding social capital. Limited interaction in a strong relationship, such as bonding social capital, will lead to a lack of psychological support, as well as negative impacts on the quality of the relationship, mutual assistance, and togetherness between respondents and bridging social capital. The lack of those elements can also affect respondents'
harmonious living, and it might as well jeopardise their daily life. Moreover, it is also of concern that respondents' low level of bonding social capital could cause them to lose social and economic support in their own community, and it is also worrying that it might break the ties between respondents and their bonding social capital. Indirectly, this could lead to negative perceptions among non-Muslim family members towards Islam and the Muslim community.
Meanwhile, bridging social capital (i.e., the Malay community) was found to have positive implications for the Chinese Muslim community, whereby they can obtain various benefits from close social bridging. Moreover, it also provides a wider network to respondents when their bonding social capital becomes limited in terms of closeness. Other than that, bridging social capital also contributed to respondents' collective action in solving problems, while increasing closeness within the local community and thereby providing a good social environment for the respondents' community group. However, on the negative side, such a high level of closeness and frequent occurrence of this element might imply that respondents belong entirely to their bridging social capital (the Malay community), when in fact they are also part of their bonding social capital (family of origin). It is worrying that this could lead to the emergence of negative perceptions that jeopardise respondents' bonding social capital. It is even worse if the functions of bonding social capital are no longer needed by respondents, although there is still room for them to re-establish their relationship with bonding social capital after converting to Islam.
Therefore, it is suggested that the Chinese Muslim community should improve and strengthen their relationship and interaction with their bonding social capital. Chinese Muslims should also bear in mind that the Islamic religion they have embraced strongly emphasises the need to establish a good relationship with their original non-Muslim family. Through this effort, the Chinese Muslim community can regain their position as part of their bonding social capital, even with differences in religion and values. In another context, Chinese Muslims should also continually strengthen and maintain their relationship with bridging social capital so that such a good relationship can contribute more towards the quality of life of the Chinese Muslim community. | 31,874 | 1,402 |
2d2f39d370913922d06f85474eca660b411ddf46 | The roots of healthy aging: investigating the link between early-life and childhood experiences and later-life health | 2023 | ["Editorial", "JournalArticle"] | Whilst early-life conditions have been understood to impact upon the health of older adults, further exploration of the field is required. There is a lack of consensus on conceptualising these conditions, and the interpretation of experiences is socially and culturally dependent. To advance this important topic, we invite authors to submit their research to the Collection on 'The impact of early-life/childhood circumstances or conditions on the health of older adults'. |
The roots of healthy aging: investigating the link between early-life and childhood experiences and later-life health
Nan Lu, Peng Nie and Joyce Siette
In recent years, there has been a growing trend among social scientists and public health researchers to employ life course data and analytical techniques as a means of better comprehending the biological, social and environmental factors that determine health outcomes during the later stages of life. By tracing the association between social circumstances and health over the course of an individual's life, from childhood through to older age, this approach seeks to develop a more nuanced understanding of this complex relationship.
The importance of early life experiences for people's health throughout the life course is not novel. Decades of research have identified the impact of early life experiences on later health [1]. Indeed, recent studies have found a number of relevant childhood variables, including but not limited to socioeconomic status, adverse experiences (e.g., abuse and neglect), disease, and health resources during childhood [2], with cascading effects on health during adulthood and late adulthood. Proponents of the latency model suggest that poor childhood conditions could have a long-term and irreversible influence on individuals' health trajectories [3]. For example, malnutrition in childhood could weaken immune systems and contribute to lower growth rates of musculoskeletal systems, which could further influence joint inflammation in later life [4].
Adverse experiences and poor health care resources in childhood could also impose a long-term adverse impact on brain development, which could contribute to cognitive impairment at older age [5]. Furthermore, the pathway model suggests that childhood conditions could indirectly affect health in later life through adulthood conditions [1]. The life-course perspective and cumulative inequality theory have further enriched our understanding of protective and risk factors in early life and how they affect the health of older adults [3].
Given that the first 1000 days of life between conception and a child's second birthday have short- and long-term effects on human health and function, and are identified as the most crucial window of opportunity for interventions [6], a growing number of studies have investigated the linkage between in utero circumstances and health in later ages.
From a life-course perspective, the fetal origins hypothesis posits that fetal exposure to an adverse environment, in particular to in utero malnutrition, is associated with increased risks of cardiovascular and metabolic diseases at adult and older ages [7]. A large body of literature has validated this hypothesis in older populations. For instance, a strand of existing studies has linked fetal malnutrition or famine with later-life health problems, including decreased glucose tolerance, schizophrenia, heart disease, obesity, type 2 diabetes, increased mental illness and mortality. Additionally, prior literature has also associated other in utero risk factors, such as exposure to conflict and violence [8] and to influenza pandemics [9], with ill health in older age.
Nonetheless, life-course studies on the nexus between prenatal adversities and later health face the threat of mortality selection [10]. As such, the early-life impacts of adversities on later-life health may be weak or even disappear when the influence of selection outweighs the detrimental effects of fetal exposure to adversities, or when fetal exposure indeed has no long-run health impacts [10]. We acknowledge that estimates of the effects of in utero exposures on later-life health may be sensitive to different analytic approaches and measures of health outcomes.
Although there are a number of studies concerning the "long arm" of childhood conditions, major research gaps remain. First, the interpretations of childhood experiences are culturally and socially dependent. Therefore, empirical evidence across countries and cultures, especially from developing countries and regions, is needed to test these hypotheses.
Additionally, there is a lack of consensus on the conceptualization and measurement of childhood conditions. While many studies assess one or several aspects of childhood conditions, future studies are recommended to use a comprehensive set of measures of childhood conditions to test their combined effects on an individual's health in later life simultaneously. Indeed, the exploration of multiple experiences and exposures will enable a better assessment of the breadth of childhood adversity and opportunities and its link with both adults' and older adults' health.
Longitudinal research allows us to not only test the baseline level (i.e. intercept) and change rate (i.e. slope) of health outcomes and how they were affected by childhood conditions, but also examine the mechanisms linking childhood conditions, adulthood conditions, and health outcomes in later life.
Enhancing our comprehension of the cumulative impact of childhood experiences across various key timepoints can promote multidisciplinary prevention strategies that emphasize early intervention. By providing collaborative services that address diverse adversities affecting individuals and families throughout their lives, these efforts can deliver integrated programs that offer support and decrease the likelihood of future generations being impacted by negative experiences.
Optimizing the long-term health of individuals requires an in-depth understanding of the roots of healthy aging, from early experiences to mid-life health, and its associated impact on later-life health. Physical, social, mental and biological environments are likely to play a synergistic, critical, yet complex role in promoting and maintaining healthy aging. In this Collection, we aim to present original research and evidence synthesis to advance our understanding of the relationship between early experiences, later-life health, and the physical, social, and organizational aspects of being. We particularly welcome contributions that explore this relationship and offer insights into optimizing aging and wellbeing. We hope that this collection will empower healthcare professionals, researchers and policy makers to find innovative ways to enhance care and promote healthy aging on a population-level.
---
Data Availability
Not applicable.
---
Authors' contributions
All authors conceived and drafted the Editorial. PN and JS revised the Editorial. All authors read and approved the final manuscript.
---
Declarations
Ethics approval and consent to participate Not applicable.
---
Consent for publication Not applicable.
Competing interests NL, PN and JS are guest editors of the Collection. NL, PN and JS are Editorial Board members.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 7,000 | 470 |
eeb0c709d394f37f8bee71abf3b1797da8072106 | Excess mortality and the COVID-19 pandemic: causes of death and social inequalities | 2022 | ["JournalArticle"] | Background: During the coronavirus disease 2019 (COVID-19) pandemic, the population's mortality has been affected not only by the risk of infection itself, but also through deferred care for other causes and changes in lifestyle. This study aims to investigate excess mortality by cause of death and socio-demographic context during the COVID-19 pandemic in South Korea. Methods: Mortality data within the period 2015-2020 were obtained from Statistics Korea, and deaths from COVID-19 were excluded. We estimated 2020 daily excess deaths for all causes, the eight leading causes of death, and according to individual characteristics, using a two-stage interrupted time series design accounting for temporal trends and variations in other risk factors. Results: During the pandemic period (February 18 to December 31, 2020), an estimated 663 (95% empirical confidence interval [eCI]: -2356 to 3584) excess deaths occurred in South Korea. Mortality related to respiratory diseases decreased by 4371 (3452-5480), whereas deaths due to metabolic diseases and ill-defined causes increased by 808 (456-1080) and 2756 (2021-3378), respectively. The increase in all-cause deaths was prominent in those aged 65-79 years (941, 88-1795), with an elementary school education or below (1757, 371-3030), or who were single (785, 384-1174), while a decrease in deaths was pronounced in those with a college-level or higher educational attainment (1471, 589-2328). Conclusions: No evidence of a substantial increase in all-cause mortality was found during the 2020 pandemic period in South Korea, as a result of a large decrease in deaths related to respiratory diseases that offset increased mortality from metabolic disease and diseases of ill-defined cause. The COVID-19 pandemic has disproportionately affected those of lower socioeconomic status and has exacerbated inequalities in mortality. | Background
The coronavirus disease 2019 (COVID-19) pandemic has posed a serious and persistent threat to global public health and has brought unprecedented changes to daily life. Moreover, the unprecedented scope of the worldwide pandemic has led to extraordinary demands on the healthcare system, resulting in critical shortages of medical resources and serious reductions in social capital [1]. Thus, to alleviate the burden of the pandemic, numerous countries have implemented a number of non-pharmaceutical interventions, such as social distancing and individual hygiene practices, although there have been differences in both the intensity and effectiveness of these interventions [2].
Along with the risk of infection itself, the collateral effects of the pandemic have affected population health and may be associated with mortality risk through various pathways [3][4][5][6][7][8][9][10]. During the pandemic, medical resources and mobilisation have been concentrated on patients with confirmed COVID-19, and less critical medical services for non-COVID-19 patients with less severe or less urgent diseases and/or those at a lower age-related risk have frequently been postponed or cancelled [3,4]. In addition, previous studies have reported that medical accessibility is closely associated with socioeconomic status [5,6], and that changes in lifestyle and health behaviours during the pandemic (such as wearing masks and engaging in fewer social and physical activities) might exhibit non-uniform effects on people with heterogeneous characteristics, with differential findings according to disease type, age, sex, educational level, and marital status [5,[7][8][9][10]. In sum, these results indicate that limited access to medical services during the pandemic might disproportionally affect individuals depending on their medical and socioeconomic status.
Excess mortality, defined as the increase in deaths compared to the expected number of deaths, has been widely used as a representative indicator for the damage caused by the pandemic with respect to human health [11]. Multiple studies have reported excess mortality attributed to the pandemic [12][13][14][15][16]. Nevertheless, although it can be strongly conjectured that the health damage related to the pandemic is heterogeneous among populations, most previous studies on pandemic-associated excess mortality have solely addressed total mortality (i.e., without consideration of causes of death and variation according to individual characteristics), and only a few studies have evaluated cause-, sex-, age-, race-, or income level-specific impacts [13][14][15][16][17][18]. However, an in-depth examination of cause-specific and individual-specific excess mortality can provide scientific evidence informing interventions in vulnerable populations as well as public health resource allocation. We note that South Korea (hereafter termed Korea) has been evaluated as a country that has successfully responded to the pandemic with widespread testing and epidemiological investigations at the initial pandemic stage; therefore, assessing excess deaths occurring due to the pandemic in Korea can provide an informative evidence base for public health researchers and policymakers [19,20]. Nevertheless, although the socio-demographic characteristics may be involved in shaping the consequences of COVID-19, they have not been considered in previous studies in Korea. Hence, this study aimed to investigate nationwide excess mortality during the 2020 pandemic period in Korea and to identify relevant factors that could affect excess mortality, including causes of death and individual characteristics (i.e., age, sex, educational level, and marital status). We hypothesised that we would observe social inequities in mortality outcomes during the pandemic period.
---
Methods
---
Statement on guidelines
This study complies with relevant guidelines and regulations. All of our data are publicly available and do not include any identifiable information. This study was carried out using only data from Statistics Korea, the Korea Disease Control and Prevention Agency, and the Korea Meteorological Administration, and there was no direct involvement of participants. Thus, patient consent procedures and ethics approval were not required for this study.
---
Data
We downloaded data on deaths occurring between 2015 and 2020 in all 16 regions of Korea from Statistics Korea [21]; the information available for each death case included individual characteristics: date of death, age, sex, education level, marital status, and underlying cause of death (classified according to the 10th Revision of the International Classification of Diseases; ICD-10). From these data, we calculated the daily number of deaths from all causes, by the eight leading causes of death, and by individual characteristics. We also collected data on confirmed cases of COVID-19 occurring in 2020 from the Korea Disease Control and Prevention Agency [22]. Data on daily average temperatures in 2015-2020 across the 16 regions of Korea were obtained from the Korea Meteorological Administration [23].
---
Causes of death
We considered deaths from all causes as well as from the eight leading causes of death based on the main category (i.e., the first letter) of the ICD-10 code, including infectious diseases, neoplasms, metabolic diseases, circulatory diseases, respiratory diseases, genitourinary diseases, ill-defined causes, and external causes (see Supplementary Table S1 for more detailed information). The ICD-10 codes for COVID-19 deaths (U07.1, U07.2) were excluded from this study to identify the collateral impacts of the pandemic on mortality; COVID-19 deaths accounted for only a small portion of total deaths (950 in total; 0.3% of total deaths) (see Supplementary Table S2 for more detailed information).
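As a rough illustration of this grouping step, the Python sketch below assigns a death record to a leading-cause group using the first letter of its ICD-10 code and excludes the COVID-19 codes. The letter-to-group mapping shown here is a simplified assumption for illustration only, not the exact definitions in Supplementary Table S1.

```python
# Simplified, assumed mapping from ICD-10 chapter letters to the leading-cause groups.
CAUSE_GROUPS = {
    "A": "infectious", "B": "infectious",
    "C": "neoplasms", "D": "neoplasms",
    "E": "metabolic",
    "I": "circulatory",
    "J": "respiratory",
    "N": "genitourinary",
    "R": "ill-defined",
    "S": "external", "T": "external",
}
COVID_CODES = {"U07.1", "U07.2"}  # COVID-19 deaths, excluded from the analysis


def classify(icd10_code):
    """Return a leading-cause group for a death record, or None if excluded or unmapped."""
    if icd10_code in COVID_CODES:
        return None
    return CAUSE_GROUPS.get(icd10_code[0])


print(classify("J18.9"))  # respiratory
print(classify("U07.1"))  # None (COVID-19 death, excluded)
```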
---
Individual characteristics
To investigate the impact of COVID-19 on excess deaths according to socio-demographic factors, death cases were aggregated by sex, age (< 65, 65-79, and ≥ 80 years), education level (elementary school, middle school, high school, and ≥ college), and marital status (single, married, other [e.g., divorced, widowed]).
---
Two-stage analyses
We conducted two-stage interrupted time-series analyses to quantify the excess risk of mortality during the COVID-19 pandemic period as compared with the prepandemic period in Korea, following a methodological approach delineated in previous studies [24,25].
In the first stage, a quasi-Poisson regression model was applied to each of the 16 regions in Korea [26]. In the time-series analysis, the use of other methods (e.g., an autoregressive integrated moving average model) [27] was limited because the death count data take values in the non-negative integers. Thus, we performed quasi-Poisson regression with seasonality and long-term trend adjustments using spline functions [26]. We used the number of days from the first confirmed COVID-19 case to estimate the time-varying risk during the outbreak period (January 20 to December 31, 2020). We included a linear term for date to model long-term trends, a term for day of the year to control for seasonality, and dummy indicators for the day of the week to adjust for variation by week. We also modelled the relationship between average daily temperature and mortality using a distributed lag nonlinear model [28,29]. The characteristics of the 16 regions considered in this study are presented in Supplementary Table S2.
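To make the structure of this first-stage model concrete, the following Python sketch (statsmodels) fits a quasi-Poisson baseline with trend, seasonality, day-of-week, and temperature terms on synthetic data with hypothetical variable names; it is not the authors' code. It also simplifies the design: instead of the paper's time-varying excess-risk spline and distributed lag nonlinear model for temperature, the baseline is fitted to pre-pandemic years and used to predict expected 2020 deaths, giving an observed/expected ratio analogous to the daily RR.

```python
# Simplified stand-in for the first-stage model (synthetic data, hypothetical variable names).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
dates = pd.date_range("2015-01-01", "2020-12-31", freq="D")
df = pd.DataFrame({"date": dates})
df["time"] = np.arange(len(df))                      # long-term trend
df["doy"] = df["date"].dt.dayofyear                  # seasonality
df["dow"] = df["date"].dt.day_name()                 # day-of-week dummies
df["temp"] = 12 + 10 * np.sin(2 * np.pi * (df["doy"] - 110) / 365) + rng.normal(0, 2, len(df))
df["deaths"] = rng.poisson(np.exp(3.0 + 0.00005 * df["time"] - 0.01 * (df["temp"] - 12)))

pre = df[df["date"] < "2020-01-01"]
# Quasi-Poisson: Poisson mean structure with dispersion estimated from the Pearson chi-square.
baseline = smf.glm(
    "deaths ~ time + cc(doy, df=4) + C(dow) + temp + I(temp ** 2)",
    data=pre, family=sm.families.Poisson(),
).fit(scale="X2")

df2020 = df[df["date"] >= "2020-01-01"].copy()
df2020["expected"] = baseline.predict(df2020)
df2020["rr"] = df2020["deaths"] / df2020["expected"]  # daily observed/expected ratio
print(df2020[["date", "deaths", "expected", "rr"]].head())
```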
In the second stage, we pooled the region-specific coefficients of excess risk obtained during the COVID-19 period to the nationwide level using a mixed-effects multivariate meta-analysis approach [30]. The best linear unbiased prediction (BLUP) was then calculated for each of the 16 regions to stabilise the variability due to the large differences in population size between regions, leading to more precise estimates [31].
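A univariate analogue of this second stage can be sketched as follows; the paper pools full sets of region-specific coefficients with a multivariate mixed-effects meta-analysis, whereas this illustration pools a single log-RR per region with the DerSimonian-Laird estimator and applies BLUP-style shrinkage. All numbers are made up.

```python
# Univariate sketch of the second-stage pooling (hypothetical region estimates).
import numpy as np

log_rr = np.array([0.02, -0.01, 0.05, 0.00, 0.03])    # region-specific log-RR estimates (made up)
se = np.array([0.015, 0.020, 0.030, 0.010, 0.025])    # their standard errors (made up)

# DerSimonian-Laird estimate of the between-region variance tau^2.
w_fixed = 1 / se**2
mu_fixed = np.sum(w_fixed * log_rr) / np.sum(w_fixed)
q = np.sum(w_fixed * (log_rr - mu_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)

# Random-effects pooled estimate.
w_re = 1 / (se**2 + tau2)
mu_re = np.sum(w_re * log_rr) / np.sum(w_re)

# BLUP-style shrinkage of each region's estimate towards the pooled mean,
# weighted by the ratio of between-region variance to total variance.
shrink = tau2 / (tau2 + se**2)
blup = mu_re + shrink * (log_rr - mu_re)
print("pooled log-RR:", round(mu_re, 4), "region BLUPs:", np.round(blup, 4))
```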
More detailed information on the two-stage interrupted time-series design employed herein can be found in the Supplementary Material.
---
Quantification of excess deaths
The relative risk (RR) of excess mortality was calculated to quantify excess deaths attributable to COVID-19. We obtained the predicted values for excess mortality via BLUP region-specific estimates and exponentiated these values to obtain the RR for each day of the outbreak period in each region. The daily number of excess deaths was computed as n * (RR - 1) / RR, where n represents the number of deaths per day. We aggregated the daily excess number of deaths by pandemic wave and plateau for each of the 16 regions and for the entirety of Korea. The definition of the COVID-19 period is presented in the Supplementary Material. We computed empirical 95% confidence intervals (eCIs) for the coefficients using Monte Carlo simulations.
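The calculation can be illustrated with the short Python sketch below, which applies the excess-death formula to a few days of data and obtains an empirical interval by simulating the log-RR coefficients from an assumed normal sampling distribution. All numbers (daily deaths, log-RR estimates, and their standard error) are hypothetical.

```python
# Illustrative computation of excess deaths and an empirical CI (all numbers hypothetical).
import numpy as np

rng = np.random.default_rng(42)
n_deaths = np.array([820, 790, 845, 860, 810])          # observed daily deaths (made up)
log_rr_hat = np.array([0.03, 0.01, 0.04, 0.05, 0.02])   # estimated daily log-RR (made up)
log_rr_se = 0.02                                        # assumed standard error

rr = np.exp(log_rr_hat)
excess = n_deaths * (rr - 1) / rr                       # excess deaths = n * (RR - 1) / RR
print("point estimate of total excess deaths:", round(excess.sum(), 1))

# Monte Carlo simulation: resample the log-RR coefficients from their approximate
# sampling distribution and recompute the total excess deaths each time.
sims = rng.normal(loc=log_rr_hat, scale=log_rr_se, size=(5000, len(log_rr_hat)))
rr_sim = np.exp(sims)
excess_sim = (n_deaths * (rr_sim - 1) / rr_sim).sum(axis=1)
lo, hi = np.percentile(excess_sim, [2.5, 97.5])
print(f"95% empirical CI: {lo:.1f} to {hi:.1f}")
```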
We repeated the main analysis described earlier for stratified analyses to estimate the number of excess deaths for each eight leading causes of death and individual characteristics.
---
Sensitivity analyses
We conducted several sensitivity analyses to assess the robustness of our findings. More specifically, we applied five and six internal knots in the quadratic B-spline function for days since the first COVID-19 confirmed case, four and six knots in the cyclic B-spline function for days of the year, and 14 and 28 days of lag period in the distributed lag nonlinear model.
---
Results
---
Excess all-cause mortality
Total deaths and estimated excess deaths during the pandemic period between February 18 and December 31, 2020 are reported in Table 1. During this period, 260,432 deaths were reported in Korea. The number of excess deaths from all causes was estimated as 663 (95% eCI: -2356 to 3584), indicating that there was no evident excess in total mortality during the pandemic period as compared with the pre-pandemic period.
---
Excess cause-specific mortality
Nevertheless, we found heterogeneous excess deaths when evaluating cause-specific deaths (Table 1). For example, the number of deaths related to respiratory diseases decreased by 4371 due to the pandemic (95% eCI: 3452-5480), corresponding to a 12.8% decrease in this mortality outcome (10.4%-15.5%). However, excess deaths due to metabolic diseases and ill-defined causes attributable to the pandemic increased by 808 (456-1080) and 2756 (2021-3378), corresponding to percentage increases of 10.4% (5.6%-14.4%) and 11.1% (7.9%-14.0%), respectively.
---
Excess mortality by individual characteristics
We found that the impact of the pandemic on mortality was disproportionate according to socio-demographic characteristics (Table 1, Fig. 1). For example, the excess mortality attributable to the pandemic was prominent in those aged 65 to 79 years (excess deaths 941, 95% eCI: 88-1795; percentage excess 1.2%, 95% eCI: 0.1%-2.4%), those with an elementary school or lower educational level (1757, 371-3030; 1.5%, 0.3%-2.6%), and in the single population (785, 384-1174; 3.9%, 1.9%-5.9%). However, we found a decrease in the mortality rate during the pandemic in people with a college-level or higher educational attainment (1471, 589-2328; 4.1%, 1.7%-6.4%).
---
Temporal trends in excess mortality
For all-cause deaths, we found fluctuations and inconsistent patterns in the temporal trend for excess risk (RR) and the percentage excess in mortality across the waves and plateaus of the COVID-19 pandemic (Fig. 2). The excess risk of mortality started decreasing from the beginning of the 1st plateau, then gradually increased until it reached its peak in the 2nd wave. Subsequently, the risk continued to decrease, with a sharp decline evident during the 3rd wave. For cause-specific deaths, three types of deaths showed obvious and consistent patterns during the pandemic period. Namely, we found a decrease in mortality related to respiratory diseases and an increase in mortality due to both metabolic diseases and diseases with ill-defined causes (see Supplementary Table S3).
---
Excess mortality both by causes of death and individual characteristics
Excess mortality due to cause-specific deaths and according to individual characteristics is shown in Fig. 3 and Supplementary Table S4. Respiratory disease-related mortality showed an evident reduction during the pandemic in all age groups. However, excess mortality due to ill-defined causes prominently increased in those aged 80 years or older with percentage excess of 15% (8.4%-20.8%). Moreover, across all specific causes, an increase in mortality due to the pandemic was generally more evident in those with lower education levels (high school or lower), while a decrease in mortality was more obvious in those with higher education levels (college or higher). The exception to this trend was with regard to respiratory disease deaths, which showed reduced mortality across all educational groups. This pattern (i.e., a higher excess mortality in those with lower education levels) was more prominent for metabolic and ill-defined causes of death. Excess mortality attributable to the pandemic was generally more pronounced across all specific causes in the single population.
---
Sensitivity analysis results
Sensitivity analyses were performed to assess whether these findings were consistent with the modelling specifications; the sensitivity analysis results revealed the robustness of our main results (see Supplementary Tables S5 and S6).
---
Discussion
This study investigated nationwide excess mortality during the COVID-19 pandemic in Korea according to cause of death and individual characteristics. Although no substantial excess in all-cause deaths was evident in the total population during the pandemic period, apart from deaths from COVID-19 itself, we found disproportionate impacts of the pandemic on mortality by cause of death, education level, and marital status. In general, the excess mortality attributable to the pandemic was more evident in deaths from metabolic and ill-defined diseases, in those with lower education levels, and in the single population.
Several previous studies have evaluated trends in excess mortality during the first year of the pandemic. For example, a study evaluating mortality trends in 29 industrialised countries reported an increase in mortality due to the pandemic [13]. Another study evaluating trends in 67 countries also showed that most countries experienced an increase in mortality during the pandemic, with the exception of some countries with higher testing capacities [32]. Nevertheless, we did not detect evident increases in mortality due to the pandemic in Korea in the current study. We conjecture that this pattern may be closely associated with the early and extensive testing and comprehensive epidemiological investigations implemented in Korea in response to the pandemic, which have been identified as effective countermeasures in reducing the spread and mortality rate associated with COVID-19 [19,20].
Although some previous studies have reported excess mortality during the pandemic in Korea, the results have been mixed. For example, some studies showed no evident increase in annual deaths in 2020 [13,17], whereas another study reported a decrease in mortality in 2020 [32]. However, these previous studies were based on weekly or monthly mortality data. Thus, we believe that our study, which was based on daily data and employed a cutting-edge standardised time-series analysis, can provide more precise estimates than these prior investigations.
This study identified that the impact of the pandemic on mortality was disproportionate in accordance with cause of death, age group, educational level, and marital status. First, we found that a large decrease in mortality from respiratory diseases during the pandemic was the major factor in the non-increased mortality pattern evident in this study, and that this trend may have offset increases in the mortality rate due to metabolic and ill-defined diseases that occurred during the pandemic period. This result is consistent with that of a previous study that examined the decline in the incidence and mortality of respiratory diseases in Korea during the pandemic period [8,17]. We note that the Korean government has generally implemented high levels of social distancing, personal hygiene, and mask-wearing since the initial stage of the pandemic, and that these are major factors in the decrease in respiratory virus infection evident in this country [19,20].
Fig. 3 Percentage excess in mortality (with 95% empirical confidence interval) during the COVID-19 pandemic period (February 18 to December 31, 2020) in Korea for each cause of death by age, education, and marital status. Abbreviations: Infectious = Certain infectious and parasitic diseases, Metabolic = Endocrine, nutritional and metabolic diseases, Circulatory = Diseases of circulatory system, Respiratory = Diseases of respiratory system, Genitourinary = Diseases of genitourinary system, Ill-defined = Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified, External = Injury, poisoning and certain other consequences of external cause
However, we observed prominent excess mortality attributable to the pandemic in metabolic disease-related deaths, and this excess mortality was more evident in those with lower education levels and single marital status. These results could partly be explained by the unintended impact of interventions against the pandemic [7]. For example, social distancing and fewer outdoor activities could increase time spent indoors and lead to worsened health behaviours, such as unhealthier diets and less exercise. Moreover, restricted and reduced accessibility to medical services during the pandemic could negatively affect consistent care for patients with chronic metabolic diseases, resulting in fewer hospital visits and medications; these impacts may have been more pronounced in those with low socioeconomic status.
In addition, we found that excess mortality due to the pandemic was evident in regard to deaths from ill-defined causes. Interestingly, we found that the number of deaths due to ill-defined causes increased throughout 2020 in Korea, and that this pattern was not observed for other causes of death. Moreover, this increasing pattern was more pronounced in those aged 80 years or older. From our study data, we found that 67.8% of deaths from ill-defined causes in 2020 occurred in those aged 80 years or older, and that senility (one of the specific causes of "ill-defined cause" mortality) accounted for nearly half (49.8%) of these deaths (see Supplementary Tables S7 and S8). Although additional investigations are needed, our results imply the possibility that older individuals at the end of life may have reduced their hospital visits due to the pandemic, so that exact causes of death were not reported accurately for this population. Therefore, we cautiously surmise that this reduced contact with medical services, and the resulting less accurate certification of causes of death, may have contributed to the increase in deaths attributed to ill-defined causes during the pandemic.
We also found a disproportionate impact of the pandemic regarding individual characteristics associated with socioeconomic inequality. First, we found that an increase in mortality during the pandemic was evident in those aged 65-79 years, but we did not detect obvious excess mortality in those aged 80 years or older. We conjecture that this may be because the younger-old population (i.e., those aged approximately less than 80 years) may have been more likely to delay or cancel medical care, voluntarily or involuntarily, as most medical services prioritized older populations and COVID-19 patients during the pandemic period. Also, considering the "depletion of susceptibles" or "healthy survivor" effect, those in the very old age group may be less susceptible to risk factors that can lead to death than those who died earlier in life [4]; however, more in-depth studies are required in the future to support this conjecture.
The above speculation is substantiated by the following figures. In Korea, hospital visits, hospitalisations, and emergency department (ED) visits during the pandemic decreased to a greater degree in those aged 65-79 years than in those aged 80 years or older [33-35]. In addition, the impact of restricted medical access on deaths in those aged 65-79 years can be inferred given that the leading causes of death in this age group in 2020 were diseases that require regular and timely care, such as neoplasms and circulatory diseases (see Supplementary Table S7). Moreover, when investigating the older population, the "depletion of susceptibles" or "healthy survivor" effect should be kept in mind; more specifically, survival to very old age may indicate that individuals are less susceptible to risk factors that can lead to death, including the impact of the COVID-19 pandemic, than are those who died earlier in life [36,37].
One of our main findings was that excess deaths due to the pandemic were more prominent in those with low educational levels and in the single population, and that this pattern was common to most causes of death. Previous studies have reported that people with low educational levels (i.e., a proxy for low socioeconomic status) generally have worse health outcomes as well as more limited access to health care resources as compared with highly educated people [38]. Single people are also likely to have worse health status than married people, although there is no consensus as to whether marriage provides a protective effect against adverse health outcomes or whether less healthy or socially disadvantaged individuals are more likely to remain unmarried [39]. It should also be considered that unmarried people may have lived with their parents and received protection from their families [40], but this was not observed in this study. Moreover, during the pandemic, single or unmarried people might become more socially isolated, and people with lower socioeconomic status might face more threats to health, including reduced necessary care, unemployment, financial insecurity, lack of psychosocial resources, and less healthy lifestyles [9].
Regarding the temporal trend in the impact of the pandemic on all-cause mortality, we found that the associated excess risk increased during the 2nd wave of the pandemic period and then sharply decreased during the 3rd wave, resulting in an offset of the total excess in mortality. This reflects the fact that the trend in total deaths during the period corresponding to the 3rd wave (October 26 to December 31) in 2020 was lower than that in the previous period (see Supplementary Fig. S1). In particular, our results imply that the reduction in all-cause mortality in the winter season, corresponding to the 3rd wave in this study, may be associated with a prominent decrease in mortality from respiratory diseases during that period (Supplementary Table S3). Preventive behaviours for ameliorating the spread of the pandemic, such as wearing masks and maintaining personal hygiene, can reduce the risk of infection-related mortality, and these effects might be more pronounced in the winter (when respiratory infections commonly occur) [8]. Nevertheless, this study only investigated trends during the first year of the COVID-19 pandemic, and additional studies are needed to explore long-term trends in excess mortality attributable to the pandemic.
Some limitations of our study must be acknowledged when interpreting the findings reported herein. First, we did not account for seasonal influenza activity and other time-varying confounders, which can affect the relationship between COVID-19 and mortality, as relevant data were not available. Future studies should consider how such time-varying confounders can be controlled for in the model. Second, in addition to this limited adjustment for confounders, our study design (an ecological study with a time-series design) is limited in its ability to establish the causal effect of COVID-19. Therefore, further studies with more elaborate data and robust methods for counterfactual analysis, such as the synthetic control method, are needed. Finally, we only examined excess deaths during the COVID-19 period in 2020, which may be insufficient to capture the prolonged effects of the pandemic on mortality. This issue can be addressed by additional investigations regarding trends in excess mortality due to the pandemic over a longer period.
Despite these drawbacks, a notable strength of our study is the application of a cutting-edge two-stage interrupted time-series design that allows for flexible estimation of excess mortality and adjusts for temporal trends and variations in known risk factors. Another major strength of our study is that we performed this analysis using officially reported nationwide death data with daily count units and stratified the primary findings by cause of death as well as by individual characteristics, thus offering comprehensive and evidence-based information on the impact of the pandemic in Korea for informing future public health research, policy decision-making, and resource allocation.
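To make the counterfactual idea behind such a design concrete, the following is a minimal, single-location sketch: a quasi-Poisson regression of pre-pandemic daily deaths on a long-term trend and simple annual seasonality, used to predict expected deaths for the pandemic period. This is an assumption-laden illustration (the variable names, the seasonal terms, and the single-stage structure are simplifications introduced here), not the two-stage model pooled across the 16 Korean regions that the study actually used.

```python
import numpy as np
import statsmodels.api as sm

def fit_baseline_and_predict(df, pandemic_start="2020-02-18"):
    """Quasi-Poisson baseline for daily deaths with trend + annual seasonality.

    df : pandas DataFrame indexed by date with a 'deaths' column (e.g., 2015-2020;
         hypothetical input). Returns expected (counterfactual) daily deaths for the
         pandemic period.
    """
    d = df.copy()
    d["t"] = np.arange(len(d))                       # long-term trend
    doy = d.index.dayofyear
    d["sin1"] = np.sin(2 * np.pi * doy / 365.25)     # annual seasonality (one harmonic)
    d["cos1"] = np.cos(2 * np.pi * doy / 365.25)

    X = sm.add_constant(d[["t", "sin1", "cos1"]])
    train = d.index < pandemic_start                 # fit only on pre-pandemic days

    model = sm.GLM(d.loc[train, "deaths"], X[train],
                   family=sm.families.Poisson())
    fit = model.fit(scale="X2")                      # quasi-Poisson via Pearson scale

    expected = fit.predict(X[~train])                # counterfactual daily deaths
    return expected
```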
---
Conclusion
In conclusion, our study indicates that no excess in all-cause deaths occurred during the COVID-19 pandemic period in Korea in 2020, although differential risks of mortality were evident across specific causes of death and individual characteristics. The findings of our study highlight the need for efforts to address disproportionate access to medical care as well as inequities in health status that have been exacerbated by the pandemic and likewise provide important information regarding the allocation of resources for interventions aimed at addressing inequities in medical and socioeconomic status.
---
Availability of data and materials
The data used for this study will be made available to other researchers upon reasonable request. Data for this study have been provided by Statistics Korea, Korea Disease Control and Prevention Agency, and Korea Meteorological Administration, and the data are publicly available.
---
Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1186/s12889-022-14785-3.
Additional file 1: Supplementary Table S1. Causes of death and corresponding ICD-10 codes. Supplementary Table S2. Characteristics of the 16 regions in Korea. Supplementary Methods. Supplementary Table S3. Number of total deaths and estimated excess deaths (with 95% empirical confidence intervals) during the COVID-19 pandemic period (February 18 to December 31, 2020) in Korea by cause of death and phase of the pandemic. Supplementary Table S4. Percentage excess in mortality (with 95% empirical confidence interval) during the COVID-19 pandemic period (February 18 to December 31, 2020) in Korea by cause of death and individual characteristic. Supplementary Table S5. Percentage excess in mortality (with 95% empirical confidence interval) by cause of death for main model and each sensitivity analysis. Supplementary Table S6. Percentage excess in mortality (with 95% empirical confidence interval) by individual characteristic for main model and each sensitivity analysis. Supplementary Table S7. Number of total deaths (%) during the COVID-19 pandemic period (February 18 to December 31, 2020) in Korea by cause of death and individual characteristic. Supplementary Table S8. Number of total deaths (%) by main specific causes of Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified (R00-R99) in 2020. Supplementary Figure S1. Temporal trends of total deaths during the study period (2015-2020).
Authors' contributions
J.O., J.M., C.K., and W.L. conceived and designed the study. J.O. performed the statistical analysis and wrote the manuscript. W.L. and H.K. supervised all manuscript procedures. All authors provided input to the preparation and subsequent revisions of the manuscript. The author(s) read and approved the final manuscript.
---
Declarations
Ethics approval and consent to participate
Not applicable.
---
Consent for publication
Not applicable.
---
Competing interests
The authors have no actual or potential competing interests to declare.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 28,683 | 1,831 |
a9368ac82a94ef9ce33cb140ba0526fafbb9e6b4 | Determining Public Opinion of the COVID-19 Pandemic in South Korea and Japan: Social Network Mining on Twitter | 2,020 | [
"JournalArticle"
] | Objectives: This study analyzed the perceptions and emotions of Korean and Japanese citizens regarding coronavirus disease 2019 . It examined the frequency of words used in Korean and Japanese tweets regarding COVID-19 and the corresponding changes in their interests. Methods: This cross-sectional study analyzed Twitter posts (Tweets) from February 1, 2020 to April 30, 2020 to determine public opinion of the COVID-19 pandemic in Korea and Japan. We collected data from Twitter (https://twitter.com/), a major social media platform in Korea and Japan. Python 3.7 Library was used for data collection. Data analysis included KR-WordRank and frequency analyses in Korea and Japan, respectively. Heat diagrams, word clouds, and rank flowcharts were also used. Results: Overall, 1,470,673 and 4,195,457 tweets were collected from Korea and Japan, respectively. The word trend in Korea and Japan was analyzed every 5 days. The word cloud analysis revealed "COVID-19", "Shinchonji", "Mask", "Daegu", and "Travel" as frequently used words in Korea. While in Japan, "COVID-19", "Mask", "Test", "Impact", and "China" were identified as high-frequency words. They were divided into four categories: social distancing, prevention, issue, and emotion for the rank flowcharts. Concerning emotion, "Overcome" and "Support" increased from February in Korea, while "Worry" and "Anxiety" decreased in Japan from April 1. Conclusions: As a result of the trend, people's interests in the economy were high in both countries, indicating their reservations on the economic downturn. Therefore, focusing policies toward economic stability is essential. Although the interest in prevention increased since April in both countries, the general public's relaxation regarding COVID-19 was also observed. | I. Introduction
Following the novel coronavirus disease (COVID-19) outbreak in Wuhan, China, in December 2019, COVID-19 spread rapidly to other countries, including Korea and Japan, two of the countries closest to China. The first confirmed cases in Korea and Japan were reported on January 19, 2020 and January 16, 2020, respectively. Considering its severity, the World Health Organization declared the disease a pandemic on March 11, 2020 [1].
By April 30, 2020, the incidence rate of COVID-19 in Korea (10,765 confirmed cases, 247 deaths) and Japan (14,119 confirmed cases, 435 deaths) showed downward trends, due to the responses of both governments [2]. However, the physical and psychological stress among the general public has continued, given the continued occurrence of new confirmed cases. Psychological counseling services have reportedly increased after the emergence of COVID-19 due to the increased incidence of depression and excessive stress [3][4][5]. Previous studies also indicated the possibility of increased distress and psychological fatigue among the general public due to the escalation of governmental regulations [6].
In response, the Korean government announced "psychological quarantine" on March 6 for the psychological stress induced by COVID-19. Their National Trauma Center (http://nct.go.kr) provides "COVID-19 Integrated Psychological Support Group", "Ways to keep good mental health", and "Counseling services to patients and their parents" [7]. The Japanese Ministry of Health, Labor, and Welfare has implemented psychological treatment & support projects in response to COVID-19. They provide counseling in the form of a chat called "Consultation of the social network services (SNS) mind related to COVID-19" [8]. While these nationallevel mental health/stress policies should be implemented after properly identifying citizens' mental health conditions, stress, and needs concerning COVID-19, previous studies showed that the COVID-19 mental health policies in Korea and Japan lacked information about these needs [9,10]. People from different countries show different reactions to the COVID-19 pandemic due to their different sensitivities, government responses, and psychological support; thus, consideration of these diverse aspects is critical [11][12][13].
In recent years, Internet and smartphone usage has increased rapidly, and social media platforms function as new forms of communication. More than 60% of Koreans use the Internet for social networking, and more than 50% of Japanese reported using more than one SNS platform. Previous studies have analyzed the perceptions and emotional status of the public through Twitter, a major social media platform, and showed that social media platforms can reflect users' emotional states [14].
This study analyzed the perceptions and emotions of Korean and Japanese citizens regarding COVID-19. It analyzed the frequency of words used in Korean and Japanese tweets related to COVID-19 and the corresponding changes in their interests. It also aimed to provide evidence to establish the COVID-19 mental health policies of both governments.
---
II. Methods
---
Study Design
This cross-sectional study analyzed Twitter posts (Tweets) from February 1, 2020 to April 30, 2020 to determine public opinion regarding the COVID-19 pandemic in Korea and Japan.
---
Data Collection
We collected data from Twitter (https://twitter.com/), a major social media platform in Korea and Japan, between February 1, 2020 and April 30, 2020. Search terms to collect tweets (posts on Twitter) included "corona (코로나)" in Korean and "corona (コロナ)" in Japanese. Python 3.7 Library (Beautifulsoup and GetOldTweet3) was used for data collection. Due to a large number of tweets in Japan, we limited the daily collected data to 50,000 tweets.
The number of tweets collected in Korea and Japan was 1,470,673 and 4,195,457, respectively. The collected tweets were then segmented into words using morphological analysis, and nouns and hashtags were extracted. After the morphological analysis, duplicate and irrelevant words were removed. The final analysis included 1,244,923 and 3,706,366 tweets from Korea and Japan, respectively (Figure 1).
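As a rough illustration of this collection step, the sketch below shows how tweets could be pulled with the GetOldTweets3 library named above. The exact calls and the handling of the 50,000-tweet daily cap are assumptions on our part (the paper does not publish its script), and the library has since stopped working against Twitter's current endpoints, so the snippet documents the original workflow rather than a guaranteed currently working scraper.

```python
import GetOldTweets3 as got

def collect_tweets(query, since, until, max_tweets=50000):
    """Collect tweets matching `query` posted between `since` and `until` (YYYY-MM-DD)."""
    criteria = (got.manager.TweetCriteria()
                .setQuerySearch(query)
                .setSince(since)
                .setUntil(until)
                .setMaxTweets(max_tweets))
    tweets = got.manager.TweetManager.getTweets(criteria)
    # Keep only the timestamp and text of each tweet for later text mining
    return [(t.date, t.text) for t in tweets]

# Search terms used in this study ("corona" in Korean and Japanese); dates illustrative
korean_tweets = collect_tweets("코로나", "2020-02-01", "2020-05-01")
japanese_tweets = collect_tweets("コロナ", "2020-02-01", "2020-05-01", max_tweets=50000)
```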
---
Statistical Analysis
After the data collection, we used three kinds of statistical analysis. First, we used the KR-WordRank analysis in Korea and frequency analysis in Japan. Given the difficulties of text-mining analysis in the Korean language, owing to ambiguous word spacing, domain fitness, and postpositions such as "eun", "neun", and "iga" [15], the KR-WordRank method, which has been used widely in previous studies, was selected for analysis. KR-WordRank is a text-mining approach that performs unsupervised word segmentation. It distinguishes the exterior boundary value (EBV), which represents the probability of words around the central word, from the interior boundary value
(IBV), which shows the cohesion of the continuation characters of the central word. Each word's EBV is calculated and then iteratively reinforced through the EBVs of neighboring words, so that related words mutually strengthen one another. In contrast, the IBV scores a word's importance using mutual information (MI), which is computed from the probabilities of consecutive characters. Through this process, KR-WordRank ranks words by their importance in the network. For Japanese tweets, frequency analysis was more suitable because Japanese words are easily segmented; frequency analysis also has the advantages of high computation speed and easy implementation. Through this analysis, we estimated the change in frequency of each word over time. Following the KR-WordRank analysis, the data from February 1, 2020 to April 30, 2020 were visually represented in a "heat diagram".
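A minimal sketch of this step is shown below, using the open-source krwordrank Python package, which implements the KR-WordRank algorithm described above. The parameter values (minimum count, maximum word length, damping factor beta, and iteration count) are illustrative assumptions; the paper does not report the settings it used.

```python
from krwordrank.word import KRWordRank

def extract_keywords(texts, min_count=10, max_length=10, beta=0.85, max_iter=10):
    """Rank Korean words by their KR-WordRank importance scores.

    texts : list of (normalised) tweet strings for one time window
    """
    extractor = KRWordRank(min_count=min_count, max_length=max_length)
    keywords, rank, graph = extractor.extract(texts, beta=beta, max_iter=max_iter)
    # `keywords` maps each extracted word to its importance score
    return sorted(keywords.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical usage on the cleaned Korean tweets for a 5-day window
# top_words = extract_keywords(korean_window_texts)[:20]
```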
Second, we used "Word Cloud" to analyze word frequency from February 1, 2020 to April 30, 2020 in Korea and Japan. Third, we analyzed the rank flowcharts by categorizing the words into four types (social distancing, prevention, issue, and emotion) in both Korea and Japan.
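The word cloud and the Japanese frequency analysis reduce to counting tokenised words and rendering the resulting frequency table. A minimal sketch is given below; it assumes the Japanese tweets have already been tokenised into nouns and hashtags (e.g., with a morphological analyser), and the font path for Korean/Japanese glyphs is a placeholder, since neither detail is specified in the paper.

```python
from collections import Counter
from wordcloud import WordCloud

def word_frequencies(token_lists):
    """Count word frequencies over tokenised tweets (a list of lists of nouns/hashtags)."""
    counts = Counter()
    for tokens in token_lists:
        counts.update(tokens)
    return counts

def build_word_cloud(freqs, font_path, out_file="wordcloud.png"):
    """Render a word cloud from a {word: frequency} mapping.

    font_path must point to a font covering Korean/Japanese glyphs
    (path below is an assumption, not from the paper).
    """
    wc = WordCloud(font_path=font_path, width=800, height=600,
                   background_color="white")
    wc.generate_from_frequencies(freqs)
    wc.to_file(out_file)

# Hypothetical usage for the Japanese tweets
# freqs = word_frequencies(japanese_token_lists)
# build_word_cloud(freqs, font_path="/path/to/NotoSansCJK-Regular.ttc")
```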
---
III. Results
---
Crawling Data Characteristics
This study collected a total of 2,965,770 tweets, including 1,470,313 tweets from Korea and 4,195,457 tweets from Japan. Since we had limited the daily tweets in Japan to 50,000, 250,000 tweets were the maximum that could be collected in any 5-day period. In Korea, 371,051 corona-related tweets from February 21, 2020 to March 1, 2020 accounted for 25.2% of all tweets. On the other hand, 13,943 tweets from March 12-16 accounted for 0.9%, which was the lowest (Table 1).
---
Heat Diagram
Figures 2 and 3 present the word trend in Korea and Japan for every 5 days from February 1, 2020 to April 30, 2020. In Korea, the words "COVID-19 (코로나)" and "News (뉴스)" were consistently high since February, while "MERS (메르스)" appeared on Twitter until February 10 and then disappeared. "Shincheonji (신천지)" first appeared on February 15 and continued to rank high until April 30. "Travel (여행)" was highly ranked on February 5 but disappeared after February 20. "Online (온라인)" first appeared on April 5, and its rank increased gradually until April 30 (Figure 2). In Japan, "COVID-19 (Corona)", "Impact (影響)", "Mask (マスク)", "China (中国)", "Response (対応)", "Economy (経済)", and "Government (政府)" continuously ranked high from February 5 to April 30. The word "Olympics (オリンピック)" was not in the rankings from February 29 to March 10 but gradually increased from March 15 to April and became a high ranked word in April. The rank of "Washing hands (手洗い)" decreased from March, but increased again from April 20 (Figure 3).
---
Word Cloud
Figure 4 presents the results of the word cloud analysis of the Korean and Japanese tweets from February 1 to April 30, 2020. In Korea, "COVID-19", "Shinchonji", "Mask", "Daegu", and "Travel" occurred frequently. In Japan, "COVID-19", "Mask", "Test", "Impact", and "China" were identified as highfrequency words.
---
Rank Flowchart
We analyzed the rank flowcharts of the Korean and Japanese tweets from February 1, 2020 and divided them into four categories, namely, social distancing, prevention, issue, and emotion (Figure 5).
---
1) Social distancing
In Korea, the rank of "World" increased from March 4, while the rank of "Travel" decreased after February 26. From April, the word "Online" appeared and continuously increased in rank. In Japan, the ranks of "Going out" and "Home" continuously increased from February 13 and February 19, respectively, while the rank of "Postpone" decreased.
---
2) Prevention
In Korea, the rank of "Mask" was consistently high. The rank of "Prevention" decreased after February 12 and increased again after April 15. In Japan, the rank of "Mask" was consistently high, while the ranks of "Washing hands" and "Disinfection" decreased. The rank of "Prevention" began to increase after March 25.
---
3) Issue
In Korea, "Shinchonji" and "Daegu" continuously ranked high from February 19. The word "Donation" decreased from March 4 to March 25, after which it increased. Additionally, the ranking of "Economy" showed an upward trend since March 25. In Japan, "Economy" increased since February and reached the highest rank on March 11. The word "Olympics" rapidly decreased since February 26, and then increased from March 11.
---
4) Emotion
In Korea, "Government" was ranked 10th, "Overcome" continued to increase after February 26, and the rank trend for "Support" changed from decreasing to increasing from February 12 to March 16. In Japan, the rank of "End" rose since February and ranked in the top 10, while "Worry" and "Anxiety" decreased from April 1.
---
IV. Discussion
This study analyzed the perceptions and emotions of Korean and Japanese citizens about COVID-19 to gain insight for future COVID-19 responses.
There was a difference in the number of tweets between Korea and Japan. The final analysis included 1,470,313 tweets from Korea and 4,195,457 tweets from Japan; the number of daily Japanese tweets collected was capped because of their volume. Most Japanese citizens mainly use Twitter as their SNS [16] rather than Facebook or Instagram, whereas Twitter ranks only 7th (0.2%) in usage among Koreans, relatively lower than in Japan [17]. This may also be due to the difference in populations: the current population of Japan (126,264,931) is 2.4 times that of Korea (51,709,098) [18].
Based on the heat diagram analysis of the tweets, the words "Online", "Economy", and "Donation" had gradually reached a high rank since April 2020. Beginning March 1, elementary, middle, and high schools were temporarily closed and moved to online classes, which possibly affected the high ranks of "Online". The frequency of the word "Online" continuously increased after the Korean Ministry of Education announced an online education system on March 31 [19].
Citizens showed an interest in the economy, which was one of the areas most affected by COVID-19 in Korea. The economic growth rate (-0.1%) declined after COVID-19 compared to 2019 (2.0%). This also affected Koreans' perceptions of the economy, with many reporting it as the most difficult economic period since the International Monetary Fund (IMF) intervention in 1998 [20].
The words "Travel" and "Postpone" were ranked high at the beginning of February 2020, but their ranks gradually decreased. "Travel" began to be rarely mentioned in tweets and disappeared from the rank list after February 25, indicating that Koreans had changed their opinions regarding traveling. The rapid spread of COVID-19 from the last week of February might have affected people's interest in traveling. The number of flights and travelers had sharply declined to 70.8%, which provided an additional explanation for this trend [21]. Regarding "Postpone," school reopening and events were postponed for 1 to 2 months in the early stages of COVID-19; however, due to their continuous delay [19], people may have begun to lose interest in it.
In Japan, "COVID-19 (Corona)", "Impact (影響)", "Mask (マスク)", "China (中国)", "Response (対応)", "Economy (経済)", and "Government (政府)" were highly ranked be- https://doi.org/10.4258/hir.2020.26.4.335 tween February 5 and April 30. The Japanese government distributed two face masks per household on April 17, which generated significant public opinion and possibly affected the high rank of the word "Mask". Similar to Korea, Japanese citizens showed interest in their economy, reflecting their difficult economic situation compared to 2019. The cancellation of the Olympics by Japan, the host country, may explain the continued increase in the rank of "Olympics". The rank of "Washing hands" decreased in March and then again increased since April, indicating people's interest in personal non-pharmaceutical interventions (NPIs). The Japanese government emphasized isolation and strict social distancing until March, and then promoted personal NPIs since April [22]. We divided the words into four categories, namely, social distancing, prevention, issue, and emotion, to analyze the rank flowcharts. Regarding social distancing, the word "World" began to increase in Korea since March 4, 2020, which is close to the time of a pandemic declaration by the World Health Organization (WHO). In Japan, contrary to the decreasing number of indicators related to going outside (traffic volume, using of public transportation, etc.), the
word "going out" had continuously increased since February 19th. Considering the request to stay home and close schools, the rank for the word "Postpone" continued to decrease and "Home" began to increase. Regarding prevention, the high rank of "Mask" in both countries can be explained by the high compliance of wearing masks in Korea and Japan (Korea 78.7%; Japan 77.0%) [23]. In April, the words related to personal prevention began to rank higher compared to March, which showed a decline in ranks, thereby indicating citizens' increased interest in personal NPIs. This trend may be based on changes in policies of both governments, which shifted focus to personal hygiene from strict social distancing.
In Korea, after the government announced the disaster relief plan and people began donating for COVID-19 eradication, the ranks of "Economy" and "Donation" increased from March 25. In contrast, the rank of "Olympics" in Japan decreased from March 25, the day Japan postponed the Olympics scheduled for 2020. The word "Store" emerged in the rank list from March 4. Some infection cases emerged in several stores during the first week of March, and Japanese local governments began to request stores to close temporarily from April, which may have affected this trend [24].
Concerning emotion, the rank of "Please" consistently increased in Korea. A previous study showed that people with high compliance with personal prevention experienced high psychological stress because of those who did not maintain preventive practices [25], which might have contributed to the increasing number of tweets requesting that others maintain preventive measures. Furthermore, the word "Please" was also associated with the wish for the COVID-19 pandemic to end. We also observed increasing ranks of "Overcome" and "Support", which reflected the current economic slump. In Japan, citizens appeared to expect the COVID-19 pandemic to end, as they mentioned the word "End" frequently. In contrast, mentions of the words "Worry (心配)" and "Anxiety (不安)" decreased from April, suggesting Japanese citizens' adaptation and desensitization toward COVID-19. The WHO issued warnings about the possibility of a second wave after several countries eased their policies and citizens began to relax regarding COVID-19 [26].
This study has some limitations. First, it did not represent all age groups because SNS are used mainly by the younger generation rather than the elderly. Second, although there are several SNS platforms such as Facebook, Naver, Yahoo, Twitter, Instagram, KakaoTalk, and Band, we only collected posts from Twitter because of API permissions.
To minimize non-sampling error, we limited data collection to Twitter, although Naver in Korea and Yahoo in Japan are the most popular websites. Future studies should analyze posts from various SNS platforms to sufficiently represent the public opinion of each country.
In conclusion, this study analyzed the perceptions and emotions of Korean and Japanese citizens about COVID-19 to gain insight for future COVID-19 responses. The most frequent words were "COVID-19", "Shincheonji", "Mask", "Daegu", and "Travel" in Korea, and "COVID-19", "Mask", "Test", "Impact", and "China" in Japan; in both countries, COVID-19 and masks were frequently mentioned. The rank flowchart analysis showed that people's interest in the economy was high in both countries, reflecting worries on Twitter about the economic downturn caused by COVID-19. Although interest in prevention increased from April in both countries, the general public also appeared to relax their vigilance regarding COVID-19. We strongly suggest that psychological support strategies be established in consideration of these various aspects of public emotion.
---
Conflict of Interest
No potential conflict of interest relevant to this article was reported. | 17,489 | 1,780 |
2fab94d8625d96c4661b60a4b7bd3bb2869e03dc | Creating the Future Together: Toward a Framework for Research Synthesis in Entrepreneurship | 2,014 | [
"JournalArticle"
] | Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. • Users may download and print one copy of any publication from the public portal for the purpose of private study or research. • You may not further distribute the material or use it for any profit-making activity or commercial gain • You may freely distribute the URL identifying the publication in the public portal ?If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim. | INTRODUCTION
Broadly defined, entrepreneurship involves efforts to bring about new economic, social, institutional or cultural environments (Rindova, Barry, & Ketchen, 2009). Since Schumpeter's (1911, 1942) pioneering work, entrepreneurship has become widely acknowledged as the key driver of the market economy. Yet, entrepreneurship research as a scholarly discipline is relatively young, and several attempts toward developing a coherent entrepreneurship 'research paradigm' have been made (e.g., Davidsson, 2003; Katz & Gartner, 1988; Sarasvathy, 2001; Shane & Venkataraman, 2000; Shane, 2003; Stevenson & Jarillo, 1990). In this respect, the landscape of entrepreneurship research is still to a large extent multi-paradigmatic in nature, including fundamentally different perspectives on what entrepreneurship is, how entrepreneurial opportunities are formed, what determines the performance of new ventures, and so forth (Ireland, Webb, & Coombs, 2005; Leitch, Hill, & Harrison, 2010; Zahra & Wright, 2011). This results in widespread confusion and frustration among entrepreneurship researchers regarding the lack of convergence toward a single paradigm and the continuing lack of definitional clarity (Davidsson, 2008; Ireland et al., 2005). Shane's (2012) and Venkataraman et al.'s (2012) reflections on the 2010 AMR decade award for their article "The promise of entrepreneurship as a field of research" (Shane & Venkataraman, 2000), as well as the subsequent debate, illustrate the disagreement on key paradigmatic issues among prominent entrepreneurship researchers. These differences are not only academic in nature, but also have profound practical implications. For instance, the narrative-constructivist notion of transformation implies that entrepreneurs should focus on acting and experimenting rather than trying to predict the future, as they cannot acquire valid knowledge about uncertain and partly unknowable environments (e.g., Sarasvathy, 2001; Venkataraman et al., 2012). By contrast, other researchers advocate that entrepreneurs should predict carefully, using comprehensive analysis and systematic procedures, before engaging in entrepreneurial activities (e.g., Delmar & Shane, 2003).
Fundamentally different perspectives on the phenomenon of entrepreneurship together may provide a deeper and broader understanding than any single perspective can do. However, different ontological and epistemological points of view are also difficult to reconcile and may have diverging implications (Alvarez & Barney, 2010;Leitch et al., 2010). In this paper, we seek to respect the distinct research paradigms currently existing in the field of entrepreneurship, rather than attempt to reconcile highly different assumptions. We start from the idea that the future development of the field of entrepreneurship, as a body of evidence-based knowledge, largely depends on building platforms for communication and collaboration across different paradigms as well as across the practice-academia divide (cf. Argyris, Putnam, & McLain Smith, 1985;Frese, Bausch, Schmidt, Strauch, & Kabst, 2012;Romme, 2003;Rousseau, 2012). In this paper we draw on the literature on mechanism-based explanations (e.g., Gross, 2009;Hedström & Ylikoski, 2010;Pajunen, 2008) to introduce a mechanism-based research synthesis framework that involves outcome patterns, mechanisms and contextual conditions. Moreover, we illustrate how this framework can synthesize research across different entrepreneurship paradigms. This paper contributes to the literature on entrepreneurship research methods (e.g., Davidsson, 2008;Frese et al., 2012;Ireland et al., 2005) as well as the literature on balancing the scientific and practical utility of research (Corley & Gioia, 2011;Van de Ven, 2007;Van de Ven & Johnson, 2006), by developing a coherent approach that enhances the practical relevance of scholarly work. Defining and developing a research synthesis framework is essential to this endeavor. The framework developed in this paper serves to review and synthesize a dispersed body of research evidence in terms of outcome patterns, contextual conditions and social mechanisms. As such, this paper may also spur a dialogue on the plurality of the entrepreneurship field's ontology, epistemology and research methods, and thus advance it as a scholarly discipline and professional practice.
The argument is organized as follows. First, we discuss three modes of studying entrepreneurship that have emerged in the literature: the positivist, narrative and design research mode. Subsequently, a mechanism-based framework for research synthesis across the three research modes is introduced. A synthesis of the fragmented body of literature on opportunity perception, exploration and exploitation then serves to demonstrate how this framework can be applied and can result in actionable insights. Finally, we discuss how the research synthesis framework developed in this paper serves to connect entrepreneurship theory and practice in a more systematic manner, in order to build a cumulative body of knowledge on entrepreneurship.
---
THREE MODES OF ENTREPRENEURSHIP RESEARCH
The field of entrepreneurship research is multi-disciplinary and pluralistic in nature. It is multidisciplinary in terms of the economic, psychological, sociological, and other theories and methods it draws upon. More importantly, the pluralistic nature of the current landscape of entrepreneurship research arises from three very different modes of engaging in entrepreneurship research, labeled here as the positivist, narrative and design mode. Table 1 outlines the main differences and complementarities of these research modes.
The logical positivist research mode starts from a representational view of knowledge, and looks at entrepreneurial phenomena as (relatively objective) empirical objects with well-defined descriptive properties studied from an outsider position (e.g., Davidsson, 2008;Katz & Gartner, 1988). Shane and Venkataraman's (2000) seminal paper exemplifies the positivist mode by staking out a distinctive territory for entrepreneurship (with the opportunity-entrepreneur nexus as a key notion) that essentially draws on mainstream social science. Most entrepreneurship studies published in leading journals draw on positivism, by emphasizing hypothesis testing, inferential statistics and internal validity (e.g., Coviello & Jones, 2004;Haber & Reichel, 2007;Hoskisson, Covin, Volberda, & Johnson, 2011;Welter, 2011).
The narrative mode draws on a constructivist view of knowledge, assuming it is impossible to establish objective knowledge as all knowledge arises from how entrepreneurs and their stakeholders make sense of the world (Cornelissen & Clarke, 2010; Leitch et al., 2010). The nature of scholarly thinking here is imaginative, critical and reflexive, in order to cultivate a critical sensitivity to hidden assumptions (Chia, 1996; Gartner, 2007a, 2007b). Therefore, studies drawing on the narrative mode typically focus on qualitative data, for example in the form of case studies or grounded theory development. Whereas the positivist mode emphasizes processes at the level of either the individual entrepreneur or the configuration of the social context and institutional outcomes (Cornelissen & Clarke, 2010), researchers drawing on the narrative mode acknowledge the complexity of entrepreneurial action and sense-making in its broader context (e.g., Downing, 2005; Garud & Karnøe, 2003; Hjorth & Steyaert, 2005). As such, a key notion in the narrative tradition is the notion of (entrepreneurial) action and sense-making as genuinely creative acts (e.g., Berglund, 2007; Chiles, Bluedorn, & Gupta, 2007; Foss, Klein, Kor, & Mahoney, 2008; Sarasvathy & Dew, 2005). Appreciating the authenticity and complexity of these acts is thus given precedence over the goal of achieving general knowledge. An example of this type of work is Garud and Karnøe's (2003) study of technology entrepreneurship in the area of wind turbines in Denmark and the US.
The design mode draws on Herbert Simon's (1996) notion of a science of the artificial, implying that entrepreneurial behavior and outcomes are considered as largely artificial (i.e., human made) in nature (Sarasvathy, 2004). As such, entrepreneurial behavior and accomplishments are considered as tangible or intangible artifacts with descriptive as well as imperative (although possibly ill-defined) properties. Consequently, entrepreneurship researchers need to "actually observe experienced entrepreneurs in action, read their diaries, examine their documents and sit in on negotiations" and then "extract and codify the 'real helps' of entrepreneurial thought and action" (Sarasvathy & Venkataraman, 2011, p. 130) to develop pragmatic tools and mechanisms that can possibly be refined in experimental work. The rise of 'scientific' positivism almost completely drove the design mode from the agenda of business schools (Simon, 1996), but design thinking and research have recently been regaining momentum among entrepreneurship researchers (e.g., Dew, Read, Sarasvathy, & Wiltbank, 2009; Sarasvathy, 2003, 2004; Van Burg, Romme, Gilsing, & Reymen, 2008; Venkataraman et al., 2012). Although the initial work of Simon is often considered as having a strong positivist stance, the design research discourse has subsequently developed into a research mode that focuses on how people construct tangible and intangible artifacts, which embraces both positivist and constructivist approaches (Cross, 2001; Romme, 2003). Table 1 provides a more detailed account of each research mode.
---------------Insert Table 1 about here-----------------
As can be inferred from Table 1, each research mode may share characteristics with another one. For example, studies drawing on the design mode often also draw on constructivist perspectives on knowledge (e.g., Dew et al., 2009; Van Burg et al., 2008) that are at the center of the narrative perspective. However, the overall purpose of design research is a pragmatic one (i.e., to develop actionable knowledge), whereas the main purpose of narrative research is to portray and critically reflect. The overall purpose driving each research mode strongly affects the assumptions made about what scholarly knowledge is, how to engage in research, and so forth (see Table 1).
In this respect, each research mode can be linked to one of the 'intellectual' virtues or modes identified by Aristotle: episteme, techne and phronesis. Following Flyvbjerg (2001), the intellectual mode of episteme draws on universal, invariable and context-independent knowledge and seeks to uncover universal truths (e.g., about entrepreneurship). Episteme thus thrives on the positivist idea that knowledge represents reality, and as such, it draws on denotative statements regarding the world as-it-is. Evidently, the mainstream positivist mode in entrepreneurship research largely exploits and advances the intellectual mode of episteme. By contrast, the narrative mode mainly draws on phronesis, which involves discussing and questioning the values and strategies enacted in a particular setting (e.g. the values and strategy that drive a new venture). A key role of phronesis thus is to provide concrete examples and detailed narratives of the ways in which power and values work in organizational settings (Cairns & Śliwa, 2008;Flyvbjerg, 2001). Finally, techne refers to pragmatic, variable and context-dependent knowledge that is highly instrumental (Flyvbjerg, 2001), for example, in getting a new venture started. This is the intellectual mode that is strongly developed among experienced entrepreneurs, who leverage their own expertise and competences and get things done in a pragmatic 'can-do' manner (cf. Sarasvathy, 2001).
Aristotle's three intellectual modes appear to be essential and complementary assets to any attempt to create an integrated body of scholarly and pragmatic knowledge on entrepreneurship.
Consequently, the three research modes outlined in Table 1 can be positioned as complementary resources in an integrated body of knowledge. This raises the question how research findings arising from the positivist, narrative and design modes can be combined in a cumulative body of knowledge on entrepreneurship.
---
MECHANISM-BASED RESEARCH SYNTHESIS
The future development of the field of entrepreneurship largely depends on efforts to combine and synthesize contributions from all three modes in Table 1, to be able to develop a body of evidence-based and actionable knowledge. In this section, we describe a framework for research synthesis. In doing so, we seek to respect the uniqueness and integrity of each of the three modes outlined in Table 1, rather than comparing and possibly integrating them.
The literature on evidence-based management, and more recently evidence-based entrepreneurship, has been advocating the adoption of systematic review and research synthesis methods ( e.g., Denyer & Tranfield, 2006;Denyer, Tranfield, & Van Aken, 2008;Rousseau, 2006;Rousseau, Manning, & Denyer, 2008) and quantitative meta-analyses (Frese et al., 2012). Briner and Denyer (2012) recently argued that systematic review and research synthesis tools can be distinguished from prevailing practices of reviewing and summarizing existing knowledge in management -such as in textbooks for students, literature review sections in empirical studies, or papers focusing on literature review. The latter practices tend to motivate reviewers to be very selective and emphasize 'what is known' rather than 'what is not known'; reviewers also tend to cherry-pick particular findings or observations, possibly producing distorted views about the body of knowledge reviewed (Briner & Denyer, 2012;Geyskens, Krishnan, Steenkamp, & Cunha, 2009). Therefore, systematic review and research synthesis methods should be instrumental in synthesizing the literature, by drawing on systematic and transparent procedures (Briner & Denyer, 2012).
Quantitative meta-analysis serves to systematically accumulate evidence by establishing the effects that are repeatedly observed and cancelling out weaknesses of individual studies, but there always remains a gap between knowledge and action (Frese et al., 2012). Essentially, a meta-analysis can deliver well-validated and tested predictions of a phenomenon as the regular outcome of the presence/absence of a number of antecedents, without explaining why this phenomenon occurs (cf. Hedström & Ylikoski, 2010;Woodward, 2003). Here, qualitative review and research synthesis protocols, as extensively described and discussed elsewhere ( e.g., Denyer & Tranfield, 2006;Denyer et al., 2008;Tranfield, Denyer, & Smart, 2003), have a key complementary role in explaining the contextual contingencies and mechanisms through which particular experiences, perceptions, actions or interventions generate regular or irregular outcomes (Briner & Denyer, 2012). Therefore, we draw on mechanism-based explanation to develop a broadly applicable perspective on research synthesis in entrepreneurship.
A large and growing body of literature in a wide range of disciplines, ranging from biology to sociology and economics, draws on the 'mechanism' notion to explain phenomena (Hedström & Ylikoski, 2010). Basically, mechanisms are defined as something that explains why a certain outcome is produced in a particular context. For instance, organization theorists use the mechanism of 'escalation of commitment' to explain ongoing investments in a failing course of action (Pajunen, 2008) and mechanism-based explanations have also gained some foothold elsewhere in management and organization studies (Anderson et al., 2006;Davis & Marquis, 2005;Durand & Vaara, 2009;Pajunen, 2008;Pentland, 1999). In particular, studies drawing on a critical realist perspective (cf. Bhaskar, 1978;Sayer, 2000) have used the notion of mechanism to bridge and accumulate insights from different philosophical perspectives (Kwan & Tsang, 2001;Miller & Tsang, 2011;Reed, 2008;Tsoukas, 1989). This focus on abstract mechanisms is relatively agnostic about the nature of social action (Gross, 2009) and thus can steer a path between positivist, narrative and design perspectives on research.
In the remainder of this paper, we therefore start from the idea that research synthesis serves to identify mechanisms within different studies and establish the context in which they produce a particular outcome (Briner & Denyer, 2012;Denyer et al., 2008;Tranfield et al., 2003;Rousseau et al., 2008). We build on mechanism-based work in sociology that draws on a pragmatic notion of mechanisms (Gross, 2009) and thus avoids the ontological assumptions of critical realism which some have criticized (Hedström & Ylikoski, 2010;Kuorikoski & Pöyhönen, 2012). The literature on pragmatism has identified the so-called 'philosophical fallacy' in which scholars consider categories (e.g., the layered account of reality in critical realism) as essences, although these are merely nominal concepts that have been created to help solve specific problems (Dewey, 1929;Hildebrand, 2003;Kuorikoski & Pöyhönen, 2012). This fallacy causes conceptual confusion, in the sense that both (critical) realists and anti-realists may not appreciate the integrative function and identity of inquiry, which leads them to create accounts of knowledge that project the products of extensive abstraction back onto experience (Hildebrand, 2003).
Although there is some variety in the definition and description of mechanisms, the following four characteristics are almost always present (Hedström & Ylikoski, 2010;Pawson, 2002;Ylikoski, 2012). First, a mechanism explains how a particular outcome or effect is created.
Second, a mechanism is an irreducible causal notion, referring to how the participating entities (e.g., entrepreneurs or managers) of a process (e.g., decision-making) generate a particular effect (e.g., ongoing investments in a failing course of action). In some cases, this mechanism is not directly observable (e.g., the market mechanism). Third, mechanisms are not a black box, but have a transparent structure or process that makes clear how the participating entities produce the effect. For instance, Pajunen (2008) demonstrates how an 'escalation of commitment' mechanism consists of entities (e.g., decision makers) that jointly do not want to admit the lack of success of prior resource allocations to a particular course of action and therefore decide to continue this course of action. Fourth, mechanisms can form a hierarchy; while parts of the structure of the mechanism can be taken for granted at one level, there may be a lower-level mechanism explaining them. In the escalation of commitment example, Pajunen (2008) identified three underlying mechanisms: (1) managers assure each other that the past course of action is still the correct one; (2) the owners of the company promote the ongoing course of action and issue bylaws that make divestments more difficult; (3) creditors fund the continuation of the (failing) course of action by granting more loans. In sum, a well-specified mechanism is a basic theory that explains why particular actions, beliefs or perceptions in a specific context lead to particular outcomes.
To capture the variety of micro-to-macro levels at which mechanisms can operate in the social sciences, Hedström and Swedberg (1996) distinguish three types of mechanisms. First, situational mechanisms describe how macro-level conditions shape the beliefs, desires and opportunities of individual actors. Second, action-formation mechanisms describe how individuals, given these beliefs, desires and opportunities, choose a particular course of action. Third, mechanisms at a collective level describe how individuals collectively create a particular outcome. Yet, multiple mechanisms can co-produce a particular outcome at a certain level and in a given context. To identify the correct and most parsimonious mechanisms, counterfactual or rival mechanisms need to be considered (Durand & Vaara, 2009; Woodward, 2003; Ylikoski, 2012). By exploring and/or testing different alternative scenarios that have varying degrees of similarity with the proposed explanatory mechanism, one can assess and establish to what extent this mechanism is necessary, sufficient, conditional and/or unique. For instance, by explicitly contrasting two rival mechanism-based explanations, Wiklund and Shepherd (2011) established experimentation as the mechanism explaining the relationship between entrepreneurial orientation and firm performance.
Clearly, even a mechanism-based explanation does not resolve the paradigmatic differences outlined in Table 1 (cf. Durand & Vaara, 2009), nor is it entirely ontologically and epistemologically neutral. As such, the framework for research synthesis outlined in the remainder of this section may be somewhat more sympathetic toward representational and pragmatic than the constructivist-narrative view of knowledge, particularly if the latter rejects every effort at developing general knowledge (Gross, 2009). Nevertheless, our framework does create common ground between all three perspectives on entrepreneurship by focusing on outcome patterns, social mechanisms as well as contextual conditions.
---
Outcome Patterns
An idea that cuts across the three literatures outlined in Table 1 is to understand entrepreneurship as a societal phenomenon involving particular effects or outcome patterns. That is, merely contemplating radically new ideas or pioneering innovative pathways as such do not constitute 'entrepreneurship' (Davidsson, 2003;Garud & Karnøe, 2003;Sarasvathy, Dew, Read, & Wiltbank, 2008). Accordingly, entrepreneurship must also include empirical observable outcome patterns such as, for example 'wealth or value creation' (Davidsson, 2003), 'market creation' (Sarasvathy et al., 2008), 'creating new options' (Garud & Karnøe, 2003), or creating new social environments (Rindova et al., 2009). A key assumption here is that there are no universal truths or straightforward causalities in the world of entrepreneurship. What works well in a new venture in the professional services industry may not work at all in a high-tech startup. Thus, we need to go beyond a focus on simple outcome regularities, as there might be different ̶ possibly unobserved ̶ factors (e.g., conditions and mechanisms) influencing the mechanisms at work (Durand & Vaara, 2009). The aim is to establish causal explanations that have the capacity or power to establish the effect of interest (Woodward, 2003). Therefore, research synthesis focuses on (partly) successful or unsuccessful outcome patterns, which can be characterized as so-called 'demi-regularities' in the sense that they are more than randomly produced, although countervailing factors and human agency may also prevent the outcome (Lawson, 1997;Pawson, 2006).
---
Social Mechanisms
As previously argued, mechanisms explain why particular outcome patterns occur in a particular context. Many scholars connect social mechanisms to Merton's theories of the middle range that "lie between the minor but necessary working hypotheses that evolve in abundance during day-to-day research and the all-inclusive systematic efforts to develop a unified theory that will explain all the observed uniformities of social behavior, social organization and social change" (Merton, 1968: 39; see Hedström & Ylikoski, 2010; Pawson, 2000). Thus, mechanisms do not aim to describe the causal process in a very comprehensive, detailed fashion, but depict the key factors and processes that explain the essence of an outcome pattern. Considering mechanisms as middle-range theories also highlights that mechanisms are not necessarily empirically observable and that conceptual and theoretical work may be needed to identify the mechanisms explaining why certain outcomes are observed in a particular context. Social mechanisms in the context of entrepreneurship research involve theoretical explanations, for example, learning in the area of opportunity identification (Dimov, 2007), the accumulation of social capital in organizational emergence (Nicolaou & Birley, 2003), fairness perceptions in cooperation processes (e.g., Busenitz, Moesel, Fiet, & Barney, 1997) or effectuation logic in entrepreneurial decision making (Sarasvathy, Forster, & Ramesh, 2013).
Social mechanisms are a pivotal notion in research synthesis because a coherent and integrated body of knowledge can only begin to develop when there is increasing agreement on which mechanisms generate certain outcome patterns in particular contexts.
---
Contextual Conditions
A key theme in the literature is the heterogeneity and diversity of entrepreneurial practices and phenomena (e.g., Aldrich & Ruef, 2006;Davidsson, 2008;Shane & Venkataraman, 2000). In this respect, Zahra (2007) argues a deeper understanding is needed of the nature, dynamics, uniqueness and limitations of the context of these practices and phenomena. Contextual conditions therefore are a key dimension of the framework for research synthesis proposed here.
In this respect, how mechanisms generate outcome patterns is contingent on contextual or situational conditions (Durand & Vaara, 2009;Gross, 2009). For example, continental European universities operating in a social market economy offer very different institutional, economic and cultural conditions for creating university spin-offs than their US counterparts. In particular, European universities that want to create university spin-offs need to support and facilitate the mechanism of opportunity perception and exploitation much more actively than their American counterparts (e.g., Van Burg et al., 2008).
Contextual conditions operate by enabling or constraining the choices and behaviors of actors (Anderson et al., 2006; Pentland, 1999). Agents typically do have a choice in the face of particular contextual conditions, even if these conditions bias and restrict the choice. For example, a doctoral student seeking to commercialize her research findings by means of a university spin-off may face more substantial cultural barriers in a European context than in a US context (e.g., her supervisors may find "this is a dumb thing to do for a brilliant researcher"), but she may decide to push through these barriers. Other types of contextual conditions more forcefully restrict the number of options an agent can choose from; for example, particular legal constraints at the national level may prohibit universities from transferring or licensing their intellectual property (IP) to spin-offs, which (for the doctoral student mentioned earlier) eliminates the option of an IP-based startup. In general, the key role of contextual conditions in our research synthesis framework serves to incorporate institutional and structurationist perspectives (DiMaggio & Powell, 1983; Giddens, 1984) that have been widely applied in the entrepreneurship literature (e.g., Aldrich & Fiol, 1994; Battilana, Leca, & Boxenbaum, 2009; Garud, Hardy, & Maguire, 2007).
---
THE DISCOVERY AND CREATION OF OPPORTUNITIES
We now turn to an example of research synthesis based on this framework. In this section we synthesize previous research on entrepreneurship drawing on the notion of "opportunity". This substantial body of literature is highly interesting in the context of research synthesis, because the positivist, narrative and design modes have all been used to conduct empirical work in this area (cf. Dimov, 2011). Moreover, Alvarez and colleagues (Alvarez & Barney, 2007, 2010; Alvarez, Barney, & Young, 2010) recently reviewed a sample of both positivist and narrative studies in this area and concluded these studies draw on epistemological assumptions that are mutually exclusive, which would impede "developing a single integrated theory of opportunities" (Alvarez & Barney, 2010, p. 558). While we agree with Alvarez and Barney that a single integrated theory based on a coherent set of epistemological assumptions (cf. Table 1) may not be feasible, our argument in the previous sections implies that key research findings arising from each of the three research modes outlined in Table 1 can be synthesized in a mechanism-based framework.
---
Review Approach
The key question driving the literature review is: which evidence-based insights can be inferred from the literature with regard to how and when entrepreneurs perceive and act upon opportunities? In view of the evidence-based nature of this question, the first step is to include only articles containing empirical studies. In a second phase, after the review of empirical studies, we also turn to related conceptual work. We selected articles that explicitly deal with opportunity perception and/or opportunity-based action. We used the ABI/Inform database and searched for articles in which "opportunity" and "entrepreneur*" or "opportunities" and "entrepreneur*" were used in the title, keywords or abstract. To be able to assess the potential consensus and capture the entire scope of epistemological perspectives in the literature, articles were not only selected from first-tier entrepreneurship and management journals, but also from some other relevant journals. The articles were selected from Academy of Management Journal, Academy of Management Review, Administrative Science Quarterly, American Journal of Sociology, American Sociological Review, British Journal of Management, Entrepreneurship
-------------Insert Table 2 and Table 3 about here-----------------
To synthesize the findings, we read each article and coded key relationships between contextual conditions, social mechanisms and outcome patterns. In addition, we coded the theoretical and philosophical perspectives used by the authors, which showed 51 empirical articles predominantly draw on a positivist mode, 20 empirical articles follow the constructive-narrative mode, whereas 8 articles are within the design mode or are explicitly agnostic or pragmatic (see Table 3). Similar mechanisms, contexts and outcome patterns were subsequently clustered, which resulted in an overview of contextual conditions, social mechanisms and outcome patterns.
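To make the selection and coding procedure concrete, the minimal Python sketch below illustrates how the keyword filter and the coding tallies described above could be operationalized; the record fields, example entries and code labels are hypothetical illustrations rather than the actual review database or export schema.

```python
import re
from collections import Counter

# Hypothetical records exported from a bibliographic database (e.g., ABI/Inform);
# the field names and contents here are illustrative placeholders.
records = [
    {"title": "Opportunity identification and entrepreneurial learning",
     "abstract": "An empirical study of how founders identify opportunities.",
     "keywords": ["opportunity", "learning"]},
]

# Mirror the search string: ("opportunity" OR "opportunities") AND "entrepreneur*"
# appearing in the title, keywords or abstract.
opportunity = re.compile(r"\bopportunit(y|ies)\b", re.IGNORECASE)
entrepreneur = re.compile(r"\bentrepreneur\w*", re.IGNORECASE)

def matches(record):
    text = " ".join([record["title"], record["abstract"], " ".join(record["keywords"])])
    return bool(opportunity.search(text)) and bool(entrepreneur.search(text))

selected = [r for r in records if matches(r)]

# Manual coding of each selected article would then populate fields such as
# 'mode' (positivist / narrative / design), 'context', 'mechanism' and 'outcome'.
coded = [{"mode": "positivist", "context": "prior knowledge",
          "mechanism": "cognitive framing", "outcome": "opportunity perceived"}]
print(Counter(article["mode"] for article in coded))
```

Tallying the 'mode' codes in this way would reproduce counts analogous to the 51/20/8 split reported above.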
---
Synthesis Results
Table 4 provides an overview of the contextual conditions, social mechanisms and outcome patterns identified in our synthesis.
-------------Insert Table 4 about here-----------------
---
Individual cognitive framing of opportunities
One of the most discussed mechanisms generating and directing opportunity perception and exploitation (as outcome pattern) is the individual's framing of the situation at hand, in light of existing knowledge and experience (Short, Ketchen, Shook, & Ireland, 2010). Many studies seek to understand this relationship, providing an in-depth understanding of the underlying social mechanisms and contextual conditions. Figure 1 provides an overview of the specific contexts, social mechanisms and outcome patterns.
----------------------Insert Figure 1 about here------------------------
The general mechanism-based explanation here is that if an entrepreneur identifies or constructs an opportunity, (s)he most likely perceives and acts upon this opportunity if it is in line with his/her (perceived) prior experience and knowledge. Thus, an important contextual condition is formed by the amount and type of experience and knowledge. A second generic contextual condition is formed by the external circumstances, such as technological inventions and changes in these circumstances, which individuals may frame as opportunities. Within these contextual conditions, a number of different social mechanisms explain the outcome patterns of perceiving one or more opportunities, perceiving particular types of opportunities, the degree of innovativeness and development of these opportunities, and finally whether and how people act upon the perceived opportunity.
Our review serves to identify three social mechanisms within the individual cognitive framing of opportunities. First, the type and amount of knowledge enables or constrains framing the situation at hand as an opportunity. In general, people with entrepreneurial experience are more likely than non-entrepreneurs to frame something as an opportunity (Palich & Bagby, 1995).
Higher levels of education and prior knowledge enhance the likelihood of identifying opportunities (Arenius & De Clercq, 2005; Ramos-Rodríguez, Medina-Garrido, Lorenzo-Gómez, & Ruiz-Navarro, 2010) and thus increase the number of opportunities identified (Smith, Matthews, & Schenkel, 2008;Ucbasaran, Westhead, & Wright, 2007, 2009;Westhead, Ucbasaran, & Wright, 2009) or lead to more innovative ones (Shepherd & DeTienne, 2005), while industry experience makes it more likely that people act upon perceived opportunities and start a venture (Dimov, 2010). More specifically, Shane (2000) showed the existing knowledge of entrepreneurs directs the type of opportunity identified for commercializing that specific technology (see also Park, 2005). This mechanism appears to have an optimum level, as too much experience can hinder the entrepreneur in identifying new promising opportunities (Ucbasaran et al., 2009). Beyond perceiving an opportunity, knowledge and experience also appear to direct the way in which opportunities are exploited (Dencker, Gruber, & Shah, 2009).
The underlying submechanism -explaining the cognitive framing mechanism -is that prior knowledge and experience facilitate recognizing patterns from snippets of information and 'connecting the dots' to ideate, identify and evaluate a meaningful opportunity (Baron & Ensley, 2006;Grégoire, Barr, & Shepherd, 2010;Van Gelderen, 2010).
The second social mechanism (see Figure 1) serves to explain that the individual's perception about his/her knowledge and abilities is also influential, as studies from a more narrative-constructivist mode point out (Gartner, Shaver, & Liao, 2008), thus complementing the first mechanism. The third mechanism says that framing the situation at hand in light of existing knowledge and experience (as a mechanism) does not facilitate the process of identifying an opportunity if the situation does not match the entrepreneur's learning style (Dimov, 2007); this suggests the second and third mechanisms have to operate together. Evidently, other contextual conditions and mechanisms, such as social network structure, also play a role (Arenius & De Clercq, 2005). In fact, the absence of social network structures can hinder the 'individual cognitive framing of opportunities' mechanism, as shown in a study of Finnish entrepreneurs whose lack of ties in the foreign market tends to hinder perception of internationalization opportunities, even when they have specific industry knowledge (Kontinen & Ojala, 2011).
After completing the review of empirical papers, we turned to related conceptual papers.
These papers provide a number of additional insights, which have not yet, or only to a limited extent, been studied empirically. First, conceptual studies have put forward the additional mechanism of entrepreneurial alertness that explains why some entrepreneurs are more aware of opportunities than others (Baron, 2004; Gaglio & Katz, 2001; Tang, Kacmar, & Busenitz, 2012).
Second, entrepreneur's reasoning processes, including metaphorical, analogical and counterfactual reasoning, provide an additional mechanism that serves to explain how entrepreneurs come up with new opportunities (Cornelissen & Clarke, 2010;Gaglio, 2004).
Besides these two additional mechanisms, recent theorizing on the role of affect indicates that the feelings and moods of individuals form a contextual condition that influences alertness, experimentation and framing (Baron, Hmieleski, & Henry, 2012;Baron, 2008).
As a next step, we considered whether the social mechanisms identified are dependent on each other (e.g., hierarchically, sequentially or in parallel), redundant or counterfactual, and whether any mechanisms are likely to remain unobserved (cf. Durand & Vaara, 2009; Hedström & Ylikoski, 2010).
With regard to the cluster of mechanisms pertaining to individual cognitive framing of opportunities, Figure 1 lists no counterfactual mechanisms but does display a number of parallel, partly overlapping mechanisms dealing with the amount of knowledge and experience, the perception about this knowledge and experience, and the domain-specificity of that knowledge and experience. As indicated by the underlying studies, however, these mechanisms are not sufficient to produce the outcome patterns, but require other mechanisms, such as social mediation. The 'perception about one's abilities' (Gartner et al., 2008) may be redundant because most other mechanisms identified in our review do not require that entrepreneurs are aware of their abilities. Further research has to establish whether this is the case.
-----------------Insert Figure 2 about here--------------------
---
Socially situated opportunity perception and exploitation
Many studies show the individual entrepreneur's social embeddedness in a context of weak and/or strong ties mediates the perception of opportunities. We identified multiple social mechanisms, basically implying that people, by being embedded in a context of social ties, get access to new knowledge, ideas and useful contacts (e.g., Arenius & De Clercq, 2005; Bhagavatula, Elfring, Van Tilburg, & Van de Bunt, 2010; Jack & Anderson, 2002; Ozgen & Baron, 2007). Figure 2 summarizes the details of specific contexts, social mechanisms and outcome patterns. For instance, through the presence of social connections that exert explicit influence, such as in an incubator program, people can blend new and diverse ideas, obtain access to specialized resources, and also get stimulated by others to become more aware of new opportunities, resulting in the perception of one or more opportunities (Cooper & Park, 2008; Stuart & Sorenson, 2003). A study of entrepreneurship in the windmill industry uncovered the same mechanism by showing that social movements co-shape the perception of opportunities and lead people to imagine opportunities of building and operating windmills (Sine & Lee, 2009). In addition, engaging in social contacts may influence opportunity perception; for instance, people interacting with coworkers who can draw on prior entrepreneurial experiences are more likely to perceive entrepreneurial opportunities themselves (Nanda & Sørensen, 2010). Moreover, networking activities of entrepreneurs, in combination with observing and experimenting, enable the mechanism of associational thinking (Dyer, Gregersen, & Christensen, 2008) and serve to jointly construct opportunities by combining and shaping insights, as studies in the narrative research mode particularly emphasize (e.g., Corner & Ho, 2010; Fletcher, 2006). The outcome pattern typically observed here is that (potential) entrepreneurs perceive one or more particular opportunities.
The social network context also affects the outcome pattern of opportunity exploitation. For instance, in a 'closed network' involving strong ties, the mechanism of acquiring resources from trusted connections can enable resource acquisition and result in better opportunity exploitation (Bhagavatula et al., 2010). Moreover, such ties can provide a new entrepreneur with the legitimacy of established parties and/or reference customers (Elfring & Hulsink, 2003;Jack & Anderson, 2002). In addition, the support and encouragement of entrepreneurs' social networks help entrepreneurs gain more confidence to pursue radically new opportunities (Samuelsson & Davidsson, 2009) or growth opportunities (Tominc & Rebernik, 2007).
However, these mechanisms can also hinder opportunity perceptions when shared ideas and norms constrain people in perceiving and exploiting radically new opportunities, as Zahra, Yavuz and Ucbasaran (2006) showed in a corporate entrepreneurship context. Contextual conditions such as geographic, psychic and linguistic proximity limit a person's existing network, which reduces the number and variation of opportunities that can be mediated by these social ties (Ellis, 2010). In addition, observations in the African context suggest strong family ties also bring many social obligations with them, which may hinder opportunity exploitation; being exposed to a diversity of strong community ties can counterbalance this effect (Khavul, Bruton, & Wood, 2009).
As a result, the mechanisms explaining positive effects of network ties (e.g., access to knowledge and resources leading to more opportunities and better exploitation) and those causing negative effects (e.g., cognitive lock-in and limited resource availability) appear to be antagonistic. However, the contexts in which these mechanisms operate may explain the divergent processes and outcomes, as diverse networks provide more and diverse information and resources, while closed networks can create a lock-in effect (see Martinez & Aldrich, 2011).
Yet, closed networks may also have positive effects, in particular on opportunity exploitation in a western context, through trust and resource availability. As there is a large body of empirical studies in this domain (Jack, 2010;Martinez & Aldrich, 2011;Stuart & Sorenson, 2007), an evidence-based analysis of the social mechanisms, their conditions and outcomes can be instrumental in explaining the remaining inconsistencies.
A subsequent review of conceptual work in this area shows that most conceptual arguments are firmly grounded in empirical work and as such in line with our synthesis of empirical studies of socially situated opportunity perception and exploitation. Yet, conceptual work serves to draw a broader picture, theoretically explaining both the positive and negative effects of social networks. For instance, conceptual work has used structuration theory to explain how social network structures both enable and constrain entrepreneurial opportunity perception as well as the agency of individuals to act upon those opportunities (Chiasson & Saunders, 2005; Sarason, Dean, & Dillard, 2006), thus highlighting that the social mechanisms of, for instance, limiting and providing access can be at work under the very same contextual (network) conditions. Moreover, the entrepreneur's social connections (as a contextual condition) are not stable, but are also subject to active shaping (e.g., Luksha, 2008; Mole & Mole, 2010; Sarason et al., 2006), thus putting forward a 'feedback loop' from the perception of an opportunity, via the mechanism of shaping the social connections, to a co-evolved social network which in turn influences opportunity perception and exploitation.
Figure 2 suggests some overlap and/or redundancy among several mechanisms. In particular, the legitimation and resource-or knowledge-provision mechanisms appear to co-operate, and are thus difficult to disentangle. Possibly, these social mechanisms operate in a sequential manner, when legitimacy of the entrepreneur and/or venture is a necessary condition for building trust with and obtaining access to the connection (e.g., a potential investor).
---
Practice-Oriented Action Principles
This literature synthesis illustrates that the social mechanisms and outcome patterns identified in different streams of literature can be integrated in a mechanism-based framework. We identified three empirically observed mechanisms and two theoretical mechanisms with regard to the directivity of knowledge and experience in perceiving, developing and exploiting opportunities (see Figure 1). With regard to the in-depth review of socially situated opportunity perception and exploitation, we found seven mechanisms operating in a diversity of contextual conditions (see Figure 2). Table 4 presents an overview of the entire set of prevailing contextual conditions, social mechanisms and outcome patterns in the literature on entrepreneurial opportunities. The philosophical perspectives adopted in the studies reviewed range from studying opportunities as actualized by individuals and constructed in social relationships and practices (Fletcher, 2006;Gartner et al., 2008;Hjorth, 2007) to opportunities arising from and shaped by technological inventions (e.g., Clarysse, Tartari, & Salter, 2011;Cooper & Park, 2008;Eckhardt & Shane, 2011;Shane, 2000). Nonetheless, social mechanisms such as the type of existing knowledge and outcome patterns such as opportunity type are consistent. This suggests the research synthesis framework proposed in this paper is largely agnostic to underlying assumptions, and serves to build a cumulative understanding of contextual conditions, social mechanisms and outcome patterns.
--------Insert Table 4 about here---------
As a next step, we can develop practice-oriented products from this synthesis. Multiple studies have developed such practice-oriented products, for instance by codifying entrepreneurial principles for action (see Frese et al., 2012) or by developing design principles that are grounded in the available research evidence (e.g., Denyer et al., 2008). In the particular format proposed by Denyer et al. (2008), these design principles draw on a context-intervention-mechanism-outcome format, in which the intervention or action is thus explicitly described. In our research synthesis framework, the entrepreneurial action domain is captured by describing the boundaries of these actions in terms of contextual conditions, social mechanisms and outcome patterns. As such, highly idiosyncratic entrepreneurial actions within these (typically rather broad) boundaries are likely to be more effective in producing particular outcome patterns than actions that fail to acknowledge these boundaries. Consequently, because the action space is specified, one can develop specific action principles for practitioners such as entrepreneurs, policy makers, advisors or educators. To give an impression of what such a practical end-product of a mechanism-based synthesis looks like, we have transformed the findings with regard to 'individual cognitive framing of opportunities' and 'socially situated opportunity perception and exploitation' into a set of entrepreneur-focused action principles displayed in Table 5. Moreover, this table also provides some potential actions based on these principles, describing ways to trigger the social mechanism and/or change contextual conditions in order to influence the outcome pattern.
Overall, these action principles are evidence-based, in the sense that they are grounded in our research synthesis, but are not yet tested as such by practitioners in a specific context; in this respect, Denyer et al. (2008) have argued the most powerful action principles are grounded in the available research evidence as well as extensively field-tested in practice.
--------Insert Table 5 about here---------
Similarly, other context-mechanism-outcome combinations can be transformed into principles for action, pointing at ways to adapt contextual factors or ways to establish or trigger the relevant mechanisms. Previous work on evidence-based management has not only described in detail how such principles for action can be codified, but has also demonstrated that well-specified and field-tested principles need to incorporate the pragmatic and emergent knowledge from practitioners (Van Burg et al., 2008; Van de Ven & Johnson, 2006). In this respect, the research synthesis approach presented in this paper merely constitutes a first step toward integrating actionable insights from very diverse research modes into context-specific principles that inform evidence-based actions.
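As an illustration of how such context-mechanism-outcome combinations and the action principles derived from them could be recorded in a structured form, the following Python sketch uses simple data classes; the field names and the example entry (loosely based on the closed-network finding discussed above) are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class SynthesisEntry:
    """One context-mechanism-outcome combination distilled from the literature."""
    context: str          # contextual conditions under which the mechanism operates
    mechanism: str        # social mechanism generating the effect
    outcome: str          # outcome pattern (demi-regularity) observed
    sources: list = field(default_factory=list)

@dataclass
class ActionPrinciple:
    """Practice-oriented principle derived from one or more synthesis entries."""
    principle: str        # what a practitioner can do
    rationale: SynthesisEntry

entry = SynthesisEntry(
    context="entrepreneur embedded in a closed network of strong ties",
    mechanism="acquiring resources from trusted connections",
    outcome="better opportunity exploitation",
    sources=["Bhagavatula et al., 2010"],
)

principle = ActionPrinciple(
    principle="Deliberately maintain a few strong, trusted ties that can supply "
              "resources when exploiting an opportunity.",
    rationale=entry,
)
print(principle.principle)
```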
---
DISCUSSION
Entrepreneurship theorizing currently is subject to a debate between highly different philosophical positions, for instance in the discourse on the ontology and epistemology of opportunities (Short et al., 2010). To conceptually reconcile the two positions in this debate, McMullen and Shepherd (2006) proposed a focus on entrepreneurial action that would make ontological assumptions less important. Entrepreneurial action is thus defined as inherently
---
Research Implications
An important benefit of the research synthesis framework presented in this paper is that it facilitates the synthesis of dispersed and divergent streams of literature on entrepreneurship. This framework does not imply a particular epistemological stance, such as a narrative or positivist one. If anything, the epistemological perspective adopted in this paper is rooted in a pragmatic view of the world that acknowledges the complementary nature of narrative, positivist and design knowledge (Gross, 2009; Romme, 2003).
Our proposal to develop a professional practice of research synthesis may also serve to avoid a stalemate in the current disagreement on key paradigmatic issues among entrepreneurship researchers (Davidsson, 2008; Ireland et al., 2005). Rather than engaging in a paradigmatic debate that possibly results in the kind of 'paradigm wars' that have raged elsewhere in management studies (e.g., Denison, 1996), a broad framework for research synthesis will be instrumental in spurring and facilitating a discourse on actionable insights dealing with 'what', 'why', 'when' and 'how' entrepreneurial ideas, strategies, practices and actions (do not) work. In particular, we advocate building mechanism-based explanations for entrepreneurship phenomena.
Entrepreneurship studies need to go beyond establishing mere relationships, by exploring and uncovering the social mechanisms that explain why variables are related to each other, as recent calls for mechanism-based explanations of entrepreneurship phenomena also imply (Aldrich, 2010;Frese et al., 2012;McKelvie & Wiklund, 2010;Sarasvathy et al., 2013;Wiklund & Shepherd, 2011). A focus on social mechanisms not only serves to transcend paradigmatic differences, but also creates detailed explanations by identifying mechanisms and contrasting with counterfactuals. For instance, we observed similar mechanisms at work in a diversity of contexts in which an entrepreneur's knowledge and experience affect opportunity identification and exploitation. The literature in this area, although highly diverse in terms of its ontological and epistemological assumptions, is thus starting to converge toward a common understanding of how particular entrepreneurial contexts through certain social mechanisms generate particular outcome patterns.
Our framework also advances the literature on methods of research synthesis in evidence-based management. Early pioneers in this area have argued for a systematic collection of evidence regarding the effect of interventions in particular management contexts (Tranfield et al., 2003). Later work has introduced the notion of mechanisms, as an explanation of the effect of an intervention in a particular context (e.g., Denyer et al., 2008; Rousseau et al., 2008; Rousseau, 2012; Van Aken, 2004), mostly drawing on the critical realist synthesis approach developed by Pawson (e.g., Pawson, 2006). Our study highlights that the notion of mechanisms is central to overcoming the fragmented nature of the field (see Denyer et al., 2008), and further develops this notion by adopting a pragmatic perspective on mechanisms that avoids the restrictive assumptions of (critical) realism, which makes it widely acceptable.
Moreover, and more importantly, the synthesis approach developed in this paper specifies how detailed mechanism-based explanations can be created by qualitative assessments of different types of mechanisms and their hierarchy, dependency and sequence, including an analysis of rival mechanisms or counterfactuals. Our synthesis also shows the importance of context-dependency of those mechanisms and thus provides an approach that responds to repeated calls for a better inclusion of context in theorizing and researching entrepreneurship (e.g., Welter, 2011;Zahra, 2007). A key task of any research synthesis is to take stock of what the existing body of knowledge tells about the context dependency of entrepreneurial action, thus informing a broader audience about why and how particular mechanisms produce an outcome in a particular context and not in others. Finally, the example of the synthesis of the 'entrepreneurial opportunity' literature demonstrates that mechanism-based synthesis can effectively combine fragmented findings arising from quantitative studies of cause-effect relations with those arising from studies using qualitative data to assess the impact of mechanisms and contexts.
---
Practical Implications
The research synthesis perspective developed in this paper serves to bridge the so-called 'relevance gap' between mainstream entrepreneurship science and entrepreneurial practice. In search of a research domain and a strong theory, entrepreneurship researchers have increasingly moved away from practically relevant questions (Zahra & Wright, 2011). This has led to an increased awareness of the scientific rationale of entrepreneurship research (Shane & Venkataraman, 2000), but also reinforced the boundaries between the science and practice of entrepreneurship and provoked an ongoing debate on epistemic differences. As our synthesis of the entrepreneurial opportunity literature illustrates, few studies adopt a pragmatic and actionable orientation with a clear focus on the processes of practicing entrepreneurs.
Meanwhile, policy fashions rather than empirical evidence or well-established theory tend to influence entrepreneurial behavior and public policy (Bower, 2003;Mowery & Ziedonis, 2004;Weick, 2001). Moreover, previous attempts to develop practice-oriented design recommendations from 'thick' case descriptions provide only a partial view of policy (actions and interventions) or refrain from specifying the specific contexts of these recommendations.
This makes it rather difficult to formulate recommendations that bear contextual validity as well as synthesize scholarly insights (Welter, 2011; Zahra, 2007). In other words, there is a major risk that many entrepreneurs, investors and other stakeholders in entrepreneurial initiatives and processes miss out on key scholarly insights that could serve as a solid basis from which adequate strategies, policies and measures can be developed.
In this respect, evidence-based insights codified in terms of contextual conditions, key social mechanisms and outcome patterns can inform and support entrepreneurs and their stakeholders in the process of designing and developing new ventures. Although this article may not be read by many practicing entrepreneurs, its results -and future work using such an approach -are of direct relevance for those who want to take stock of the existing knowledge base with the aim to learn, educate and support evidence-based entrepreneurship. In that sense, the contextual conditions and social mechanisms identified (e.g., in our synthesis of the entrepreneurial opportunity literature) do not provide a universal blueprint but evidence-based insights that can easily be transformed into context-specific principles for action, as demonstrated in Table 5. For instance, the research synthesis conducted in this paper demonstrates legitimacy creation, cognitive lock-in, information and resource gathering as well as social obligations are key mechanisms explaining the highly diverse effects of social ties. Entrepreneurs who become aware of these mechanisms are likely to become more effective in social networking efforts, for example, by searching for variety, engaging in deliberate efforts to reshape their network structure, and so forth.
---
Limitations and Further Research
This paper presents a mechanism-based research synthesis approach that is applied to the literature on entrepreneurial opportunity formation, exploration and exploitation. We systematically collected the relevant papers on this topic using a list of journals, but both the article collection and the presentation of the synthesis were limited. A proper systematic review of the existing body of knowledge should start by collecting all research output -including working papers, books and monographs -and then explain how the number of documents was reduced according to clear and reproducible guidelines. Furthermore, in this paper we were only able to present a snippet of the synthesis and the assumptions of the studies (cf. Dimov, 2011). It is up to future work in this area to develop a full-fledged systematic database of research documents and research synthesis, including collecting insights from other relevant fields, and to do this exercise for other relevant topics in the entrepreneurship literature as well.
Moreover, we merely touched on the analysis of the dependency and redundancy of the social mechanisms identified. A formal and more detailed analysis of dependency, redundancy, counterfactuals and unobserved mechanisms (cf. Durand & Vaara, 2009) is a very promising route for further research, which may also serve to identify new mechanisms and areas of research.
Finally, future research will need to focus on systematically distinguishing different types of mechanisms -ranging from micro to macro. For instance, Hedström and Swedberg (1996) refer to situational, action-formation and transformational mechanisms; alternatively, Gross (2009) distinguishes individual-cognitive, individual-behavioral, and collectively enacted mechanisms.
Distinguishing these different types of mechanisms will serve to identify the social levels at which and contexts in which practitioners can intervene. Stevenson and Jarillo (1990: 21)
---
CONCLUSION
| 57,244 | 765 |
1dfe57e0c98f9b9ff17dea09592c5c88fe9eb6af | Reducing School Mobility | 2,013 | ["JournalArticle"] | Student turnover has many negative consequences for students and schools, and the high mobility rates of disadvantaged students may exacerbate inequality. Scholars have advised schools to reduce mobility by building and improving relationships with and among families, but such efforts are rarely tested rigorously. A cluster-randomized field experiment in 52 predominantly Hispanic elementary schools in San Antonio, TX, and Phoenix, AZ, tested whether student mobility in early elementary school was reduced through Families and Schools Together (FAST), an intervention that builds social capital among families, children, and schools. FAST failed to reduce mobility overall but substantially reduced the mobility of Black students, who were especially likely to change schools. Improved relationships among families help explain this finding. |
Introduction
Moving to a different school is very common among children in the United States. Following a cohort of kindergarteners from 1998 to 2007, the U.S. Government Accountability Office (2010) reported that 31% changed schools once, 34% changed schools twice, 18% changed schools three times, and 13% changed schools four or more times before entering high school. Mobility tends to be highest in urban schools with disadvantaged populations, especially at the elementary school level, and in the South and West regions (as opposed to the Northeast and Midwest). School mobility has been linked to a range of negative outcomes, including problems such as substance abuse (Gasper et al., 2012; Grigg, 2012; Parke & Kanyango, 2012; Rumberger, Larson, Ream, & Palardy, 1999; Reynolds et al., 2009; Swanson & Schneider, 1999; Wood, Halfon, Scarlata, Newacheck, & Nessim, 1993). In a meta-analysis of 26 studies of school mobility, Mehana and Reynolds (2004) estimated a three to four month performance disadvantage in math and reading achievement for mobile students.
Beyond the impact on individuals, there are spillover effects in high-mobility schools, as student turnover affects not only movers but also the non-movers whose classrooms and schools are disrupted (Hanushek et al., 2004). About 11.5% of schools serving kindergarten through eighth grade have at least 10% of their students leave during the school year (United States Government Accountability Office, 2010). In these high-mobility schools, even non-mobile students exhibit lower levels of school attachment, weaker academic performance, and higher dropout rates (South, Haynie, & Bose, 2007). At the school level, high mobility promotes chaos, decreases teacher morale, and increases administrative burdens (Rumberger, 2003; Rumberger et al., 1999). At the classroom level, high student turnover frustrates teachers, compromises long-term planning, and leads teachers to develop a more generic teaching approach (Lash & Kirkpatrick, 1990). Instead of addressing individual student needs, teachers slow the pace of instruction and become more review-oriented (Kerbow, 1996). This means that after only a few years, students attending high-mobility schools are exposed to considerably less information than those attending schools with lower mobility rates.
These school-level mobility effects are not trivial. While they have the potential to harm all students, there is evidence that they are worse for poor and minority students, contributing to racial and socioeconomic gaps in achievement (Hanushek et al., 2004). Furthermore, school reform efforts usually assume that students will remain in a specific school long enough for reforms to take effect, but schools in need of reform often have the highest rates of student turnover (Kerbow, 1996). High rates of student mobility are so problematic that some schools have implemented programs to discourage families from moving, such as Chicago's "Staying Put" project (Kerbow et al., 2003).
---
Causal inference
Researchers warn of potentially spurious relationships between mobility and student outcomes, since the families most likely to move are often the most disadvantaged (Gasper et al., 2010, 2012). In general, students of low socioeconomic status are more mobile than their more advantaged peers, Black and Hispanic students are more mobile than their White and Asian American peers, and students from single-parent or step-parent families are more mobile than those from traditional two-parent families (Alexander, Entwisle, & Dauber, 1996; Burkam, Lee, & Dwyer, 2009; Hanushek et al., 2004; Nelson, Simoni, & Adelman, 1996; Rumberger, 2003; Rumberger & Larson, 1998; United States Government Accountability Office, 2010). Although a causal interpretation of findings on mobility effects remains a challenge because of the many common factors associated with school moves and child outcomes, studies that attempt to disentangle the effects of these confounders consistently find student mobility to have negative consequences, both for the students who change schools and for high-mobility schools (Rumberger, 2003). The evidence is sufficient to warrant examination of why families change schools and how schools can address this issue.
---
Types of school mobility
It is important to distinguish among different types of school moves. Some types are more common than others, some are more likely to have negative consequences than others, and some potentially can be addressed by schools while others likely cannot. Researchers have made such distinctions along four dimensions: (1) whether a school move is accompanied by a residential move, (2) when the move occurs, (3) whether the move is voluntary, and (4) if it is voluntary, whether the move is dictated by a negative life event.
First, residential mobility is very common in the United States; 22% of the U.S. population moved between 2008 and 2009, and two-thirds of these moves occurred within the same county (U.S. Census Bureau, 2009). Residential and school mobility are closely linked. Approximately two-thirds of secondary school changes are associated with a residential move (Rumberger & Larson, 1998), and these moves may be more detrimental than simply changing schools. Residentially mobile adolescents have been found to have school-based friendships characterized by weaker academic performance and lower expectations, less school engagement, and higher rates of deviance (Haynie, South, & Bose, 2006a). They also tend to have higher rates of violent behavior, and among adolescent girls, a higher likelihood of attempted suicide (Haynie & South, 2005; Haynie, South, & Bose, 2006b). Not surprisingly, residential mobility is also associated with reduced achievement in elementary and middle school (Voight, Shinn, & Nation, 2012). Although our data permit an examination of residential mobility, we include it only as a supplementary analysis, because residential mobility was not affected by our school-based intervention, and controlling for residential mobility did not alter our findings.
Second, the timing of school moves matters because moving during the academic year is more disruptive than moving during the summer (Hanushek et al., 2004). The student's age and grade-level also matter; moving during early elementary school is associated with worse outcomes than moves that occur later in the schooling process, especially when school changes are frequent (Burkam, Lee, & Dwyer, 2009). Our study cannot differentiate between academic year and summer moves, but we are able to examine mobility during the early elementary grades, a critical period in child development that few studies of school mobility have explored.
Third, scholars have distinguished between compulsory and non-compulsory school changes. While the majority of school mobility occurs for non-compulsory reasons, compulsory moves, such as the transition from elementary to middle school or from middle school to high school, affect all students and are built into the structure of schooling. These moves are generally less disruptive than non-compulsory moves because school systems are set up for these transitions and all grade-equivalent students experience them together, but they are not free of negative consequences (Grigg, 2012). Because we focus on grades 1-3, our study is not complicated by compulsory moves, so we focus on an effort to curtail noncompulsory (voluntary) school changes.
Finally, voluntary school changes can be subdivided into strategic and reactive moves. Strategic moves historically have been more prevalent among white or socioeconomically advantaged families and are based on a family's choice to seek out a higher-quality or better-fitting school (also known as "Tiebout" mobility, named after C. M. Tiebout). Reactive moves occur in response to negative events, are more common among minorities and disadvantaged families, and are the type of move most frequently associated with harmful consequences (Fantuzzo, LeBoeuf, Chen, Rouse, & Culhane, 2012;Hanushek et al., 2004;Warren-Sohlberg & Jason, 1992). Some reactive moves are school-related, such as those motivated by dissatisfaction with a school's social or academic climate, conflict with students or teachers, or disciplinary problems and expulsions (Kerbow 1996). Others are not motivated by school-related factors, but instead by negative life events such as family disruption, dissolution, or economic hardship. This distinction suggests that schools have the potential to curtail certain moves but are unlikely to influence others. It also explains why some school changes are associated with positive effects, yet (most) others are not. Our data do not allow us to differentiate between strategic and reactive moves, but prior research shows that reactive mobility is high in predominantly low-income and minority urban populations, so it is very likely that most -though certainly not all -mobility in our sample is reactive rather than strategic (Alexander et al.,1996;Fong et al., 2010;Hanushek et al., 2004;Kerbow, 1996). To the extent that a school-based intervention can reduce mobility, it is likely to be through deterring school-related reactive moves.
---
Heterogeneity in school mobility
School mobility rates differ according to the characteristics of schools, where those with the highest levels of mobility are also the most disadvantaged and tend to have larger proportions of minority and low-income students (Nelson et al., 1996). Again, this fits the profile of our sample of schools. Mobility rates also differ according to the characteristics of students. Differences in mobility along racial/ethnic lines have been studied extensively. Generally, Black and Hispanic students are more likely to change schools than White and Asian American students, due in part to greater economic disadvantage (Alexander et al., 1996). Blacks also tend to change schools more frequently than other race/ethnic groups, and frequent moves are associated with an increased risk of underachievement (Temple & Reynolds, 1999). Evidence that immigrant students and English Language Learners have above-average mobility rates is also troubling because mobility is associated with a longer time for achieving proficiency in English (Fong et al, 2010;Mitchell, Destino, & Karam, 1997;United States Government Accountability Office, 2010). Moreover, differences in Hispanic subpopulations leave open the possibility of heterogeneity in school mobility among Hispanics; Mexican Americans -who comprise the majority of our sample -display particularly high mobility rates (Ream, 2005).
Student characteristics and school characteristics also interact to affect mobility. School segregation research finds evidence of white flight from predominantly minority public schools (Clotfelter, 2001), evidence of segregation between Black and Hispanic students across the public and private sectors (Fairlie, 2002), and self-segregation of a variety of groups into charter schools (Garcia, 2008). Thus, it is important to examine differential mobility across racial/ethnic groups while keeping the racial composition of schools in mind.
The availability of school choice may play a role in differential mobility patterns as well. Recent data show that Blacks (24%) are more likely to enroll in chosen (as opposed to assigned) public schools than Hispanics (17%), Asian Americans (14%), or Whites (13%) (Grady, Bielick, & Aud, 2010). Presumably, students are more likely to exercise choice when their families are dissatisfied with their assigned school, or if a new school seems particularly promising. That Blacks have the highest rates of exercising school choice suggests that, compared to other race/ethnic groups, they are either more dissatisfied with their assigned schools, more sensitive to school-related factors, more heavily recruited by choice schools, or have greater access to choice schools in their communities. With only two research sites and limited information on which schools mobile students attend, we cannot fully address the role of choice, but we do examine the extent to which proximity to charter schools influences mobility in our sample. Thus, families change schools for a variety of reasons, including family or economic circumstances, aversion to certain groups of students, dissatisfaction or conflict with the school, or attraction to other schools, and these reasons are likely to vary according to the characteristics of students and their schools. This means that strategies to reduce mobility will be more or less effective across students and schools as well. Accordingly, it is important to examine heterogeneity, both in overall mobility rates and in the effects of mobility-reducing efforts, as we do in the following analyses.
---
School Mobility and Social Capital
Relations of trust between families and school personnel, or social capital, play an important role not only in explaining why school mobility can be detrimental, but also in identifying how schools can reduce mobility. Much research implicates social capital in the negative effects of changing schools; the disruption in relationships among students, school personnel, and parents that accompanies school moves helps explain why mobile students exhibit lower achievement (Coleman, 1988;Pribesh & Downey, 1999;Ream, 2005). However, the relationship between mobility and social capital is multidirectional; not only does mobility affect social capital, but social capital also affects mobility.
---
Reducing mobility
Studies of residential mobility provide evidence that social networks play an important role in encouraging families to stay. Both nuclear and extended family ties deter long-range residential mobility, especially for racial/ethnic minorities and families of low socioeconomic status (Dawkins, 2006;Spilimbergo & Ubeda, 2004). Social ties with others living nearby deter long-distance mobility as well (Kan, 2007). Coleman (1988) lamented the decline in these informal sources of social capital and highlighted the need for formal organizations to take their place. Accordingly, there is evidence that local institutions such as churches and businesses can serve a socially integrating function that deters residential mobility (Irwin, Blanchard, Tolbert, Nucci, & Lyson, 2004).
Schools are an obvious candidate to serve this purpose with regard to school mobility. Researchers have suggested several ways for schools to encourage families to stay, many of which relate to building social capital. By improving their social and academic climates and making an effort to boost students' and their families' sense of membership in the school community, schools can increase parent engagement (Rumberger & Larson, 1998). Schools can also make themselves more attractive to students and their parents by implementing programs that promote positive relationships with families (Kerbow, 1996;Kerbow et al., 2003;Rumberger, 2003;Rumberger et al., 1999;Fleming et al., 2001). Thus, by making efforts to improve the number and quality of social relations among students, parents, and school personnel, and providing a space in which these networks can develop and operate, schools can aid in the production of social capital and possibly reduce student mobility.
---
The intervention: Families and Schools Together (FAST)
Our study examines an intervention expected to reduce school mobility by enacting the recommendations listed above. Families and Schools Together (FAST) is an intensive 8-week multi-family after-school program designed to empower parents, promote child resilience, and increase social capital -relations of trust and shared expectations -within and between families and among parents and school personnel. FAST is typically implemented in three stages: (1) active outreach to recruit and engage parents, (2) eight weeks of multi-family group meetings at the school, followed by (3) two years of monthly parent-led meetings (FASTWORKS). 2 The eight weekly sessions -which take place at the school -last approximately two and a half hours and follow a pre-set schedule, where about two-thirds of the activities center around building relationships between families and schools, and the remainder target within-family bonding (Kratochwill, McDonald, Levin, Bear-Tibbetts, & Demaray, 2004). During each session, these activities include: family communication and bonding games, parent-directed family meals, parent social support groups, between-family bonding activities, one-on-one child-directed play therapy, and opening and closing routines modeling family rituals (see the Appendix for a detailed description of each FAST activity). FAST activities are theoretically motivated, incorporating work from social ecological theory (Bronfenbrenner 1979), family systems theory and family therapy (Minuchin 1977), family stress theory (McCubbin, Sussman, & Patterson 1983), and research in the areas of community development and social capital (Coleman 1988; Dunst, Trivette, & Deal 1988; Putnam 2000) in order to build social networks by strengthening bonds among families and schools (see Kratochwill et al., 2004 and www.familiesandschools.org for specific information about FAST activities and their theoretical framework). These research-based activities, adapted to be culturally and linguistically representative, are led by a trained team that includes at least one member of the school staff in addition to a combination of school parents and community professionals from local social service agencies.
The FAST intervention has been successfully replicated and implemented across diverse racial, ethnic, and social class groups in urban and rural settings within 45 states and internationally (McDonald, 2002; McDonald et al., 1997). Several recent randomized controlled trials, including one involving the sample studied here, demonstrate that FAST engages socially marginalized families with schools and school staff and improves the academic performance and social skills of participating children (Gamoran, Turley, Turner, & Fish, 2012; Kratochwill et al., 2004; Kratochwill et al., 2009; Layzer, Goodson, Bernstein, & Price, 2001; McDonald et al., 2006). Each of these RCTs had a different study focus and explored the impact of FAST on children's educational and behavioral outcomes for samples that differed by geographic region and race/ethnicity of participants (Supplementary Table S1 briefly summarizes these previous RCTs). Our study is unique in that it examines low-income, predominantly Latino Southwestern communities, recruits all families rather than those of at-risk children, and is the first to investigate effects of FAST on school mobility.
Although FAST was not explicitly designed to reduce school mobility, its proven ability to build and enhance social relationships among members of the school community directly addresses one of the most important mechanisms by which schools can reduce mobility. FAST activities work to strengthen relationships among three specific types of networks: within families, between families within the same school community, and between families and school personnel. By developing and improving these types of relationships -and doing so within the physical boundaries of the school -FAST decreases school-related anxiety for both children and parents, reduces barriers to parent engagement, makes the school a more welcoming environment for families, and fosters the creation of parent networks within schools, where resources and social support can be exchanged (Kratochwill et al., 2004;Kratochwill et al., 2009;Layzer, Goodson, Bernstein, & Price, 2001;McDonald et al., 2006). Thus, FAST is just the sort of social capital-building organization advocated by Coleman (1988) and others to reduce school mobility.
The research on social capital, school mobility, and FAST suggests that the intervention could reduce school mobility for three reasons. First, building relationships among families within a school should increase parents' sense of membership in the school community and reduce mobility. Second, FAST makes schools central to the social networks of parents, providing physical space where these networks develop and operate and where families exchange resources and social support. Changing schools would result in a loss of this source of social capital. Third, increasing families' familiarity with, and trust of, the school and school personnel by offering a new and informal context where parents can interact with school staff should reduce school moves driven by dissatisfaction, discomfort or distrust. Thus, even though reducing mobility is not an explicit goal of the FAST intervention, it is for these reasons that we expect students in schools assigned to the FAST program to be less likely to change schools between grades 1-3 than students in control schools. Moreover, we expect FAST to be particularly effective at reducing educationally motivated moves, such as those spurred by school dissatisfaction or feelings of isolation from the school community, which, as discussed above, may be more likely for Black families. Since motives for changing schools likely vary across students, we anticipate heterogeneity in the effects of FAST on school mobility across different types of students. Because our sample of schools is relatively homogeneous, we expect less variation in effects across schools.
---
Data and Measures
---
Sample recruitment and randomization
We use data drawn from the Children, Families and Schools (CFS) study, a cluster-randomized controlled trial targeting first grade students and their families in eligible elementary schools that agreed to randomization in Phoenix, Arizona, and San Antonio, Texas.3 These cities and schools were selected because of their high proportions of Hispanic students and students eligible for the national school lunch program, and our sample reflects these characteristics. Fifty-two elementary schools were randomly assigned to a treatment condition, with half selected to receive the intervention (26 FAST schools) and half selected to continue with business as usual (26 control schools). Randomization produced two comparable groups of schools with no statistically significant differences on pre-treatment demographic or academic performance characteristics.
Participant data were collected during the students' first-grade year (2008-2009 for Cohort 1 and 2009-2010 for Cohort 2), with follow-ups at the end of Year 2 and a final survey in Year 3, when students were expected to be in third grade (2010-2011 for Cohort 1 and 2011-2012 for Cohort 2).4 Just below 60% of first grade families consented to participate in the study, which limits the generalizability of our results to some extent, but since there were no statistically significant differences in the recruitment rates between FAST and control schools, our results should be unbiased. In FAST schools, 73% of families who consented attended at least one FAST session, and among those who attended at least one session, 33% "graduated" with a "full dose" of FAST, meaning that they began in week 1 or 2 and attended six or more of the eight sessions. On average, participants attended 35% of FAST sessions, and half the participants attended multiple sessions. Fortunately, we are not missing any data related to treatment assignment, randomization, school mobility, or school characteristics. Thus, our analytic sample includes all 3,091 students who consented to the study and the 52 schools they attended in first grade. We discuss additional covariates and our handling of missing student data below.
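The "full dose" graduation rule described above is straightforward to operationalize. The following is a minimal sketch, assuming a hypothetical attendance table with one 0/1 column per weekly session; the column names and layout are illustrative and not drawn from the CFS data:

```python
import pandas as pd

def graduated(attendance: pd.DataFrame) -> pd.Series:
    """Flag FAST 'graduates': families who began in week 1 or 2 and
    attended six or more of the eight weekly sessions."""
    weeks = [f"week_{i}" for i in range(1, 9)]          # week_1 ... week_8, coded 0/1
    started_early = attendance[["week_1", "week_2"]].max(axis=1) == 1
    enough_sessions = attendance[weeks].sum(axis=1) >= 6
    return (started_early & enough_sessions).astype(int)
```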
---
Outcome and key independent variables
The outcome is a binary indicator of whether a student was enrolled in a different school in third grade than s/he attended in first grade. School moves were identified using rosters provided by schools at the beginning of the first and third years and should be very accurate. Students retained in grade were also identified so as not to be incorrectly labeled as movers. The weakness of this measure is that we are unable to identify students who made multiple school moves, or who changed schools but returned to their original school between first and third grade. We conducted both an intent-to-treat (ITT) analysis, which estimates the average treatment effect for those in schools assigned to FAST, and a complier average causal effect (CACE) analysis, which estimates the average treatment effect for those who actually complied (i.e., who graduated from FAST by attending one of the first two sessions and at least six of the total eight sessions). The key independent variable in the ITT analysis is a school-level treatment indicator, and the key independent variable in the CACE analysis is an individual-level indicator of graduating from FAST.
3 Given the large number of schools participating in the study over the two sites, a staggered implementation was necessary. Two consecutive cohorts of first graders were each divided between three seasons (fall, winter, spring). Schools were selected to have at least 25% of students from low-income families and 25% of Hispanic origin. More details about the RCT design and implementation are available upon request.
4 Students retained in first or second grade were also included.
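The roster-based mobility outcome described above could be constructed along the following lines; this is a sketch under assumed column names (student_id, school_id), not the study's actual data-processing code:

```python
import pandas as pd

def school_mover(roster_y1: pd.DataFrame, roster_y3: pd.DataFrame) -> pd.Series:
    """Return 1 if a student's Year-3 school differs from the Year-1 school, else 0.
    Grade-retained students are kept and compared on school only, so they are
    not mislabeled as movers."""
    merged = roster_y1.merge(roster_y3, on="student_id", suffixes=("_y1", "_y3"))
    return (merged["school_id_y1"] != merged["school_id_y3"]).astype(int)
```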
---
Control variables
The randomization of FAST occurred within three districts in Phoenix and two randomization blocks in San Antonio, so estimating an unbiased average treatment effect requires controlling for these units of randomization.5 These controls were included at the school level in our analyses. Additional controls can increase statistical power and correct for pre-treatment differences that may arise in spite of randomization. At the school level, we included the size of the school, the proportion of students receiving subsidized lunch, the proportions of students identified as Hispanic, Black, White, Other (Asian or American Indian), and English Language Learners, and the proportion of third-graders scoring proficient on state assessments in reading (all based on the 2008-2009 school year). Because school choice may play a role in school mobility, we also included measures of the number of charter elementary schools located within three miles of each school in Year 1 of the study, and the change in the number of such schools between Years 1 and 3.
At the student-level, we included each student's age, a log-transformed measure of travel time (in minutes) from home to school, indicators of the student's gender and race/ethnicity (Hispanic, Black, White, or Other), and indicators of whether the student was an English Language Learner, a recipient of special education services, or eligible for the national school lunch program. We also conducted supplementary analyses that incorporate information on participants' residential mobility, which we discuss at the end of the results section.
Since FAST is expected to reduce mobility by building social capital among families and between families and schools, several pre-treatment measures of parent-reported social capital were also included. These include parent reports of the number of school staff they felt comfortable approaching (staff contacts), the number of parents of their child's friends they knew (intergenerational closure; Coleman, 1988), the degree to which they agreed that they shared expectations for their child with other parents, whether they regularly discussed school with their child, and whether they regularly participated in school activities. Two additional scales were constructed from a battery of questions. The first is a parent-staff trust scale constructed from four items related to parents' perceived trust of school staff (α = 0.86). The second is a parent-parent involvement scale (α = 0.91) measuring how involved each parent was with other parents at the school, in terms of exchanging favors and social support. Together, these measures provide information on both the quantity and quality of relationships between families in the community, as well as between families and schools. More details on these social capital indicators and scale construction are provided in the Appendix.
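As an illustration of how such scales are typically built (item averaging with a minimum-response rule, standardization, and an internal-consistency check), the sketch below uses hypothetical item matrices; it is not the authors' code:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a battery of Likert items (rows = respondents)."""
    complete = items.dropna()
    k = complete.shape[1]
    item_variances = complete.var(axis=0, ddof=1).sum()
    total_variance = complete.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def scale_score(items: pd.DataFrame, min_items: int) -> pd.Series:
    """Average available items for respondents answering at least `min_items`,
    then standardize to mean 0 and SD 1, mirroring the description in the text."""
    enough = items.notna().sum(axis=1) >= min_items
    raw = items.mean(axis=1).where(enough)
    return (raw - raw.mean()) / raw.std(ddof=1)
```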
---
Missing data
Supplementary Table S2 summarizes the raw student-level data, including the number of observations for each variable. We used multiple imputation procedures to impute missing data values for student-level covariates in order to maximize the use of available information and minimize bias (Royston, 2005; Rubin, 1987; von Hippel, 2009).6 We created five imputed data sets using -ice- in Stata 12, analyzed each individually, and derived final estimates adjusted for variability between these datasets.7
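The imputation itself was done with -ice- in Stata; the final step of combining estimates across the five imputed datasets follows Rubin's (1987) rules, which might be sketched as follows (illustrative only):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool a coefficient across m imputed datasets (Rubin, 1987).
    `estimates` are the m point estimates; `variances` their squared SEs."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()                    # pooled point estimate
    within = variances.mean()                   # within-imputation variance
    between = estimates.var(ddof=1)             # between-imputation variance
    total = within + (1 + 1 / m) * between      # total variance
    return q_bar, np.sqrt(total)                # pooled estimate and its SE
```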
---
Method and Analysis
---
Intent-to-treat (ITT) analysis
Because the outcome is a dichotomous indicator of whether each student changed schools between Years 1 and 3, and treatment assignment occurred at the school level, we used a two-level logistic regression approach, as described by Raudenbush and Bryk (2002). For the ITT analysis, the comparison is based on school assignment to the treatment versus control condition rather than actual receipt of the treatment, which varied among participants. It should be noted that the ITT effect encompasses the total average effect of treatment assignment, including any effects driven by participation in the FAST sessions, subsequent FASTWORKS meetings over the next two years, as well as any spillover effects to families who did not participate. The null model, shown in equation 1, partitions the variance in the log-odds of mobility into within- and between-school components. There is no within-school error term because logistic regression predicts probabilities rather than expected values, and the error is a function of these predicted probabilities. The between-school error term, u0j, represents each school's deviation from the grand mean (γ00) and is used to estimate between-school variability.
$$ \eta_{ij} = \log\!\left[\frac{\Pr(\mathrm{move}_{ij}=1)}{1-\Pr(\mathrm{move}_{ij}=1)}\right] = \gamma_{00} + u_{0j}, \qquad u_{0j} \sim N(0,\tau_{00}) \qquad (1) $$
To estimate the unbiased ITT effect of FAST, we added the treatment indicator (FAST) along with controls for the units of randomization (RAND) to the second-level model, as shown in equation 2. 9
$$ \eta_{ij} = \gamma_{00} + \gamma_{01}\,\mathrm{FAST}_{j} + \boldsymbol{\gamma}_{0R}'\,\mathbf{RAND}_{j} + u_{0j} \qquad (2) $$
In further specifications, we added the pre-treatment student-level and school-level covariates listed above. Throughout, we used random-intercept models, which hold the effects of all student-level predictors fixed, meaning they do not vary across schools. To examine heterogeneous effects of FAST on mobility, we also included cross-level interactions of the treatment with selected student-level covariates, including race/ethnicity, gender, travel time to school, survey language, free/reduced lunch status, English Language Learner status, and special education status. These cross-level interactions permit non-randomly varying slopes for student-level predictors.8 Similarly, we examined school-level interactions between FAST and school characteristics, although with only 52 fairly homogeneous schools, our study is underpowered to detect school-level interactions.
6 Multiple imputation is the preferred method of handling missing data among many researchers, but our results are unlikely to depend on the particular strategy used. No students were missing the outcome variable or treatment indicator, there were low levels of missingness on imputed covariates, and findings were practically identical when we used listwise deletion.
7 Interactions and variable transformations were created prior to imputation. School fixed effects were included in imputation models to address the multilevel nature of the data. Analyses include indicators for students missing pre-test or demographic variables.
9 There was no evidence of Cohort and Season of implementation effects or interactions.
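The analyses were carried out in the HLM framework of Raudenbush and Bryk; purely as an illustration, a random-intercept logistic model of this kind could be approximated in Python with statsmodels' Bayesian mixed GLM. The variable names below are assumptions, and the variational fit is only an approximation to the estimator used in the paper:

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# df: hypothetical student-level frame with moved (0/1), fast (school treatment),
# rand_block (unit of randomization), covariates, and school_id for the 52 schools.
formula = ("moved ~ fast + C(rand_block) + age + log_time_to_school + C(race) "
           "+ female + ell + sped + frl + fast:C(race) + fast:log_time_to_school")
model = BinomialBayesMixedGLM.from_formula(
    formula,
    {"school": "0 + C(school_id)"},   # random intercept for each school
    data=df,
)
fit = model.fit_vb()                  # variational Bayes approximation
print(fit.summary())
```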
---
Complier average causal effect (CACE) analysis
If FAST affects school mobility, this should be especially true for students who comply with their treatment assignment and actually attend the FAST sessions, which are the core of the intervention. However, since compliance cannot be randomly assigned, quasi-experimental methods are required to estimate the effect for compliers. Families in the treatment group who attended the sessions are likely to be less prone to move than families in the treatment group who did not attend the sessions. To account for selection bias, we must compare the compliers from the treatment group to those in the control group who would have complied, had they been offered the treatment.
Our approach views compliers as a latent class of individuals that is observed for the treatment group but unobserved for the control group. By using observed data on the compliance of the treatment group and observed pre-treatment predictors of compliance for all participants, we are able to identify members of the control group who would have been most likely to comply if they had been given the opportunity. We examined several specifications of the compliance model and present the one that best distinguishes compliers and non-compliers. The compliance model was estimated simultaneously with a multilevel model predicting school mobility, similar to those used in the ITT analysis. This model assumes that FAST affected only those who complied with the treatment, and it estimates the complier average causal effect (Muthén & Muthén 2010). 9 We provide more information in the results section, and further details are available from the authors upon request.
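For intuition, under randomized assignment, monotonicity, and the exclusion restriction, the complier average causal effect equals the ITT effect rescaled by the compliance rate; the study itself estimates it with the latent-class mixture model described above rather than this simple ratio:

$$ \mathrm{CACE} = E\big[Y(1)-Y(0)\mid \text{complier}\big] = \frac{E[Y\mid Z=1]-E[Y\mid Z=0]}{\Pr(\text{complier})} = \frac{\mathrm{ITT}}{\pi_c}. $$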
---
Results
Table 1 summarizes school-level descriptive statistics by treatment and shows that there were no statistically significant differences in school characteristics across the two conditions, as expected under random assignment. Post-imputation student-level descriptive statistics, by treatment, are summarized in Table 2. The students in our sample reflect the demographic composition of the schools. About 15% of the sample was White, just over 70% was Hispanic, and nearly 10% was Black, while less than 5% made up a combination of other race/ethnic groups. We did find statistically significant differences between FAST and control schools at the individual level for some covariates. Students in FAST schools lived farther away from their schools and were more disadvantaged on most pre-treatment measures of social capital (the lone exception was parent-staff trust, which favored the FAST group). It is unclear whether these differences were due to chance, differential selection to participate in our study, or an effect of treatment assignment on survey responses relating to social capital. In any case, it is important to consider these differences and account for them in our analyses. Although this study does not focus on FAST effects on social capital per se, we note that FAST did significantly boost social capital between the beginning and end of first grade (Supplementary Table S2; Gamoran et al. 2012).
---
Intent-to-treat (ITT) results
The results of our ITT analysis are summarized in Table 3. Coefficients are presented on the logit scale, so positive coefficients correspond to a higher likelihood of changing schools, and negative coefficients correspond to a lower likelihood of changing schools. A full table with standard errors is included in Supplementary Table S3. The null model (Column 1) estimates a between-school variance of .156. The latent intraclass correlation (which uses π²/3 as the within-group variance in multilevel logit models) is .047; in other words, less than 5% of the variance in the probability of changing schools occurred between schools. This model also estimates the probability of changing schools for the typical student in the typical school to be .380. This is roughly equal to the proportion of students in our sample making a school change and is comparable to prior studies of student mobility in early elementary school. Thus, overall levels of school mobility were quite high in our sample but consistent with prior studies, and there was not much variability in mobility among schools.
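As a worked check on these quantities (using the standard latent-variable within-group variance π²/3 ≈ 3.29 for the logit):

$$ \rho = \frac{\tau_{00}}{\tau_{00} + \pi^{2}/3} = \frac{0.156}{0.156 + 3.29} \approx 0.045, $$

which matches the reported .047 up to rounding of the variance estimate, and

$$ \hat{p} = \frac{1}{1 + e^{-\gamma_{00}}} = 0.380 \;\Rightarrow\; \gamma_{00} = \ln\!\frac{0.380}{0.620} \approx -0.49. $$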
Column 2 shows the unbiased average ITT effect of FAST on mobility, which is small, positive, and non-significant. According to this model, the predicted probability of the average student in a FAST school changing schools was about .39, compared to .37 for students in control schools. There were no statistically significant differences in mobility across the units of randomization, and further analyses found no evidence of heterogeneous FAST effects (interactions) across districts or randomization blocks. In short, the findings suggest that on average, attending a school assigned to FAST did not reduce school mobility.
Earlier we reported some pre-treatment differences in student characteristics between treatment conditions. Specifically, students in FAST schools tended to report lower levels of social capital prior to treatment and to live farther from school than students in Control schools. Column 3 shows the estimates after controlling for pre-treatment student background and social capital variables. The FAST effect is even smaller and continues to be indistinguishable from zero, further suggesting that there was no main effect of FAST on school mobility. Not surprisingly, students who lived farther from their school were more likely to change schools, and mobility was lower for students whose parents knew more of their friends' parents at the beginning of first grade, suggesting that more intergenerational closure was associated with less school mobility. Together, these findings imply that pretreatment differences in social capital and distance to school do not substantially bias FAST effects, but if anything, the bias is upward, making mobility in FAST schools appear higher than it should be.
There were also differences in mobility by race/ethnicity and subsidized lunch status. School mobility was higher among Black and White students than Hispanic students, and it was higher among students who qualified for free or reduced-price lunch than those who did not. The corresponding predicted probabilities of changing schools were .35 for Hispanic students, .46 for Black students, .43 for White students, and .39 for students in the Other category. Students qualifying for free or reduced-price lunch had a .39 predicted probability of making a school change, compared to .31 for students who did not qualify for subsidized lunch.
The FAST effect estimates do not change after accounting for pre-treatment school characteristics (Column 4), which is not surprising considering there were no school-level differences in observed characteristics across treatment conditions. Mobility was significantly higher in larger schools and may have been higher in schools with more Whites and more students qualifying for free/reduced-price lunch. It also appears that mobility was higher in schools with more charter schools located nearby at the beginning of the study, but lower for schools that experienced a growth in nearby charter schools. We found no evidence of treatment effect interactions with any of these school characteristics (Supplementary Table S3).
Column 5 examines interactions of FAST with selected student characteristics. The significant negative interaction with time to school suggests that although living farther from school was associated with higher mobility, this association was significantly weaker in FAST schools. There are no significant interactions with survey language, gender, English Language Learner status, or free or reduced-price lunch status, suggesting that FAST was equally ineffective in reducing school mobility for these groups of students in our sample. Of the interactions with race/ethnicity, there is a negative and statistically significant interaction for Blacks, and smaller negative interactions for Whites and Others that do not reach statistical significance. Figure 1 translates these estimates into predicted probabilities. The substantial effect of FAST on school mobility for Black students is particularly striking considering their high mobility rates. Net of all other covariates, Black students in control schools were more likely to move (.53) than not, but in FAST schools their probability of moving was much lower (.38), bringing them to par with other non-Hispanics and nearly equal to Hispanics, who had the lowest school mobility rates in our sample.
Exploring the FAST effect on Black mobility-The significant reduction of mobility for Black students warrants further exploration, so we took several measures to examine the robustness of this finding and found convincing evidence that FAST reduced school mobility for Black students. First, we examined pre-treatment descriptive statistics by treatment for the black subsample. There were no statistically significant differences in school characteristics (averages weighted by Black enrollment), although Black students in FAST attended schools with fewer Hispanics and more charter schools nearby (Supplementary Table S4). There were also no statistically significant differences in student characteristics by treatment among Blacks, and the patterns of these differences were similar to those reported for the overall sample (Supplementary Table S5). Second, we added interactions of FAST with other pre-treatment variables, such as indicators of social capital, the racial composition of the school, and the proximity to charter schools (available upon request). Though Blacks were more likely to move from predominantly Hispanic schools and when there were more charter schools nearby, these interactions were not statistically significant, and allowing for them did not explain the Black-by-FAST interaction. Another threat to validity is that in logistic regressions, coefficients are scaled relative to the variance of the error term, so this interaction could potentially be an artifact of differences in unobserved factors related to mobility among Blacks (Allison, 1999), but models that estimated a unique variance for Blacks found no significant evidence of this unobserved heterogeneity, and allowing for it did not alter our findings. Thus, we are convinced that Black families in this sample were indeed very likely to change schools and that FAST substantially reduced their propensity to move.
We suggest two plausible explanations for the particularly high levels of Black mobility and the FAST effect reducing Black mobility. First, suppose Black families were more likely to change schools out of dissatisfaction related to poor relationships with schools, and FAST improved these relationships. The variable most relevant to this explanation is the "parent-staff trust" scale measured in the post-treatment parent survey. Second, suppose Black families were more likely to change schools because they felt isolated from other families in the school community, but FAST helped these families build relationships with others at the school. The variable most relevant to this explanation is post-treatment intergenerational closure. We tested these explanations among families completing a post-treatment survey (roughly two thirds of our sample) by adding each of these social capital measures, as well as its interactions with FAST, Black, and three-way interactions with FAST and Black, to simplified models using a package designed to test mediation in logistic regressions (Kohler, Karlson, & Holm, 2011).10 The results of this analysis are presented in Table 4. The reduced model shows the FAST and Black main effects and the Black-by-FAST interaction without allowing them to be correlated with the mediators (in gray), and the full model shows the extent to which these effects are mediated when they are allowed to be correlated. The final two columns show the percent of the Black main effect and Black-by-FAST interaction that are explained by each mediator. The results suggest that the Black-by-FAST interaction is almost totally explained by the three-way interaction of Black, FAST, and intergenerational closure. Thus, it was not simply that FAST boosted intergenerational closure, or that intergenerational closure had a larger impact on Black mobility, but that the intergenerational closure promoted by FAST had a particularly strong impact on reducing Black mobility.
It should be stressed that this analysis is non-experimental in that mediators are not randomized, so we treat this as an exploratory procedure. Nonetheless, the findings fit a story in which Black families were socially isolated from other parents in these schools, but FAST helped bring them into parental networks and reduced their propensity to change schools.
10 Testing mediators in logistic regression analyses requires special techniques. Logistic regression coefficients are scaled relative to the unobserved variance in the outcome, which changes when covariates are added to a model. The solution is to explain the same amount of variability across all models so that changes in coefficients are due solely to their relationships with mediators (Kohler, Karlson, & Holm, 2011). We use the -khb- program in Stata to conduct our mediation analyses.
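In latent-variable notation, the problem and the KHB fix described in this footnote can be restated compactly (a summary, not the authors' derivation): the logit coefficient of the key predictor x is identified only relative to the residual scale, so adding a mediator m changes that scale even absent mediation. KHB therefore compares the full model with a reduced model in which m is replaced by its residual from a regression on x, holding the scale fixed, so that

$$ \beta_{\text{indirect}} = \beta_x^{\,\text{reduced}(x,\tilde m)} - \beta_x^{\,\text{full}(x,m)}, \qquad \tilde m = m - \hat{E}[m \mid x]. $$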
---
Complier average causal effect (CACE) results
The CACE analysis focuses on the effects of FAST for those families who actually complied with the treatment assignment and graduated from FAST. Given the striking findings presented above, we estimated compliance models for the full sample and for the subsample of Blacks.11 Table 5A shows the estimates of the preferred compliance models, which classify 25% of the full sample and 16% of the Black sample as compliers or would-be compliers.12 The results from the CACE analysis, shown in Table 5B, are practically identical to those provided by the ITT analysis. FAST had no overall effect for compliers; the predicted probability of changing schools for compliers in FAST schools was .41, compared to .40 for would-be compliers in control schools. Echoing earlier findings, mobility was higher among Blacks and Whites than among Hispanics, and lower for those with higher initial levels of intergenerational closure. Because the finding of reduced mobility for Blacks in the FAST group was so intriguing, we also estimated a CACE model on the subsample of Blacks in the study. The results indicate a huge FAST effect on reducing school mobility for Black compliers. While the predicted probability of changing schools for Black compliers in FAST schools was .43, it was almost 1.0 in control schools, as practically all Black would-be compliers in these schools moved.13 Other estimates suggest potential reasons for this effect. For Blacks, pre-treatment parent-parent social capital measures were especially important predictors of reduced mobility. In particular, both higher levels of shared expectations with other parents and intergenerational closure were significantly associated with lower probabilities of school mobility. This further supports the hypothesis that increased parent-parent social capital played an important role in lowering the mobility of Black students.
---
Supplementary analyses: Residential mobility
Given the close relationship between school mobility and residential mobility, we conducted supplementary analyses incorporating data on participants' residential moves. ITT models similar to those presented above provided no evidence of FAST effects on residential mobility overall, or for any subgroup of students. We also found that controlling for residential mobility did not alter the FAST effect on school mobility, and there was no evidence of an interaction to suggest FAST effects on school mobility differed between families who did or did not move residences.
11 This model excludes the student-level covariates that were irrelevant to the Black sample (the race dummies and language variables), as well as several school-level covariates because of the lower statistical power resulting from a smaller sample. The findings hold when the school-level variables are included, but standard errors are larger.
12 We examined the robustness of our findings to the specification of compliance. Our preferred model defines compliance in terms of the FAST program's official definition of "graduation" as attending at least six of the eight weekly sessions. Using lower cut-offs such as two or four sessions yields higher compliance rates and produces qualitatively similar but less precise results than those presented here.
13 Although the especially high rates of mobility for Black would-be compliers seem odd, they were robust across different specifications of both the compliance and outcome models. It is also important to keep in mind that there were only about 15 Black compliers in each treatment condition, so 14 or 15 of them moving is not implausible given the high mobility of Black students.
---
Discussion
This study provides a rare experimental evaluation of a social capital-building intervention hypothesized to reduce student mobility in early elementary school, a significant period when moving is particularly harmful. School mobility rates were high in our sample but consistent with previously published reports; the probability of a first-grader changing schools by third grade was nearly 40%. FAST was expected to reduce mobility due to program components that build and improve relationships between families and among families and schools. This social capital was theorized to improve families' perceptions of the school's commitment or effectiveness and increase families' identification with the school community, making them less likely to leave.
For the majority of students in our sample of predominantly low-income Hispanic schools, FAST had no effect on mobility. There was evidence, however, of heterogeneity in treatment effects. First, Black students had especially high rates of school mobility, but FAST reduced their probability of changing schools between first and third grade by 29 percent. This effect held up across a variety of robustness checks, and it was even larger for those who complied with the treatment and graduated from FAST. Given recent reports that Blacks are more likely to exercise school choice than other groups (Grady et al., 2010), it is possible that FAST helped reduce school dissatisfaction among Black families in our sample by building social capital between families and schools. It is also possible that Black mobility was high because Black families felt socially disconnected from families in these predominantly Hispanic schools, but FAST aided in their integration into these communities. Our evidence, though tentative given the non-experimental nature of the mediation analysis, favors the second explanation. The intergenerational closure promoted by FAST was particularly beneficial for Black families in terms of reducing mobility, and the CACE analyses offered further evidence that parent-parent relationships were an especially important deterrent to mobility for Blacks.
Second, although students who lived farther from their schools were considerably more likely to change schools than others, this association was significantly weaker in FAST schools. It is plausible that children who lived farther away from school were more mobile because their families were less connected with the community of families at their child's school, but FAST helped incorporate them into school networks and communities, making them less likely to move. Unfortunately, further analyses were unsuccessful in supporting this hypothesis or any other social capital-related explanation of this finding.
The heterogeneity in mobility rates among race/ethnic groups is also worth revisiting. The high rates of mobility among Blacks in this sample align with prior findings, but the high rates of White mobility and the lower rates of Hispanic mobility are atypical. Similar trends have been documented in predominantly minority schools (Nelson et al., 1996) and could be related to the schools' racial composition. Whites may exhibit higher levels of school mobility in predominantly Hispanic areas due to White flight or the types of strategic moves documented elsewhere (Hanushek et al., 2004). When viewed alongside the high mobility of all non-Hispanics in this study, another explanation is that non-Hispanic students and their families feel out of place in predominantly Hispanic schools. However, we found minimal variation in white mobility rates across schools, so our data provide no evidence on such speculation.
The experimental design of our study supports the causal claim that FAST reduced mobility for Black families in our sample. This is an important finding given the role of high Black mobility in the persistence of racial achievement gaps (Hanushek et al., 2004), and the accompanying long-term consequences of these achievement gaps for Black students' later schooling, occupational, and labor market outcomes compared to their White and Asian counterparts (Jencks & Phillips, 1998; Magnuson & Waldfogel, 2008). Whether parent-parent social capital is the true causal mediator of this effect is less certain. Because the mediators we tested were not randomly assigned, unobserved factors that affect both the mediator and school mobility could lead to bias. There is no way to rule this out, but our results did hold after controlling for pre-treatment measures of our mediators. Ultimately, intensive qualitative research may be required to uncover the reasons families change schools and to understand why programs like FAST have heterogeneous effects.
While our results are illuminating, there are important limitations. Data constraints prevent us from drawing stronger inferences about why FAST decreased school mobility for Blacks or why it failed to reduce mobility for other students. Our sample is not representative of schools nationally, and the 60% of consenting families may not be representative of all families in these schools, so we encourage future research to examine the generalizability of these findings to other contexts; if similar school-based programs can promote social capital among parents within a school community and reduce mobility, this could benefit many students and schools. It may be important to examine the timing of school moves as well; convincing families to delay school changes until the summer could be beneficial, but we are unable to identify the timing of moves in our data. On a related note, it is unclear whether these findings would hold if we examined mobility over a longer period of time. FAST may have simply delayed the mobility of Black students, a short-term effect that could fade over time. Conversely, FAST could reduce mobility beyond third grade if the effects of social capital accumulate over time. We are also unable to differentiate between the two types of non-compulsory moves -strategic and reactive -as we do not know why children changed schools. School moves may have been beneficial for some students and harmful to others, but the high rates of mobility in our sample were almost certainly disruptive to schools. Understanding the different motivations behind school moves is a next step toward understanding how schools or policymakers could address student turnover.
To conclude, school mobility is an important outcome to be studied in its own right, and very little published research has examined efforts to curtail it or mitigate its negative consequences (Alexander et al., 1996;Kerbow et al., 2003;Nelson et al., 1996). Our study provides rigorous evidence that building relationships between and among families and schools may significantly reduce mobility for Black students in predominantly Hispanic schools. It is possible that these types of interventions also reduce mobility for other groups of students in schools with different racial/ethnic compositions, which we encourage future work to explore. We also encourage researchers studying the effects of educational programs and reforms to examine their impact on school mobility, as this is only the second experimental study to test ways in which schools can reduce student turnover (Fleming et al., 2001). Finally, we urge researchers to move beyond simply exploring the effects of mobility and to examine its causes as well as potential ways to prevent unnecessary and harmful moves or to mitigate their negative consequences. Social capital theory may be a critical element in these pursuits.
Feeling Charades (15 min): Parents and children engage in experiential learning by acting out feelings while other members of the family attempt to guess the emotion. The parent is in charge of ensuring turn-taking and facilitates talking about the game.
---
Kid's Time (1 hour):
FAST staff engages children with each other in supervised developmentally-appropriate activities while their parents participate in Parent time.
Parent time (1 hour): Parents from the same school connect with one another through one-on-one adult conversation ("buddy time") followed by larger-group parent discussions ("parent group") led by a FAST facilitator. Parents direct the topics of conversation, share their own issues, and offer help to each other, building informal social support networks over time and facilitating the development of intergenerational closure.
Special Play (15 min): Parent and child engage in child-directed one-on-one play. The parent is coached to follow the child's lead and not to teach, direct or judge the child in any way. FAST team members do not engage with children but support parents through discrete coaching.
Lottery: Each week, one family wins a basket filled with prizes specifically chosen for that family and valuing up to $50. The winning family is showcased during closing circle. Each family is guaranteed to win once, a secret known by parents but not children, and the winner serves as the host family for the next week's meal. This creates a tradition that is valued, respected and repeated each week among participants.
Closing circle and rain: At the conclusion of every FAST Night, parents and team members create a circle and share announcements with each other. Rain is a game played with no talking and involves turn-taking and close attention. The families' status as a community is visually and actively reinforced through this activity.
Family graduation: At the last weekly session, families attend a graduation ceremony to commemorate their completion of the program. FAST team members write affirmations to parents. This is a special event; for example, families might dress up, receive diplomas, wear graduation hats, and take photographs. Each family is announced in front of the group, and school representatives (such as the school's principal and the child's teacher) are invited to observe or participate in the graduation ceremony.
---
FAST Team members:
The FAST program is run by a trained and collaborative team of individuals that reflect the social ecology of the child (e.g. family, school, community). The team must include a parent from the child's school, a school representative (often a counselor, social worker, librarian, or teacher), and two members from local community service agencies. FAST teams are required to be representative of the racial, cultural and linguistic diversity of the families that will be participating in the program, which enables teams to communicate respectfully and appropriately with program participants.
---
Social Capital Scales and Items
Parent-Parent Social Capital (from Parent Surveys)
1. Parent-parent involvement (α = 0.91): 6 items with 4 categories each ("None", "A little", "Some", "A Lot"); items averaged and standardized for those answering at least 4 of 6 items.
"How much do other parents at this school…"
a) Help you with babysitting, shopping, etc.?
b) Listen to you talk about your problems?
c) Invite you to social activities?
"How much do you…"
d) Help other parents with babysitting, shopping, etc.?
e) Listen to other parents talk about their problems?
f) Invite other parents to social activities?
2. Intergenerational closure: 7 categories (0, 1, 2, 3, 4, 5, 6 or more)
a. "At this school, how many of your child's friends do you know?"
3. Shared expectations with parents: 4 categories ("None", "A little", "Some", "A Lot")
a. "How much do other parents at this school share your expectations for your child?"
Parent-School Social Capital (from Parent Surveys)
1. Parent-staff trust (α = 0.86): 4 items, each with 4 categories ("None", "A little", "Some", "A Lot"); items were averaged and standardized for those answering at least 3 of 4 items.
3. School participation: 5 categories ("Strongly disagree", "Somewhat disagree", "Neither agree nor disagree", "Somewhat agree", "Strongly agree").
a. "I regularly participate in activities at my child's school."
4. Talk to child about school: 5 categories ("Strongly disagree", "Somewhat disagree", "Neither agree nor disagree", "Somewhat agree", "Strongly agree").
a. "I regularly talk to my child about his or her school activities."
---
Figure 1: School Mobility by Treatment and Race/Ethnicity.
Table 4: Mediators of Black-by-FAST Interaction. Note: Estimates combined across 5 imputations; * p < .05.
Additional table notes: 25% of full sample and 16% of Black sample classified as compliers; estimates combined across 5 imputations. Full table with standard errors and school interactions provided in Supplement.
---
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.
---
Appendix
Description of FAST Activities (Kratochwill et al., 2004; McDonald, 2008)
Family Flag and Family Hellos: At the first FAST Night, each family creates a small flag to place on their family table. The parent is in charge of the process and each family member contributes to the making of the flag, which symbolizes the family unit. Each week these flags are used to denote the family's table where they eat the family meal and participate in activities.
Family Music (15 min): Families sing the FAST song and are invited to share and teach each other additional songs, such as the school song.
Family Meal (30 min): Each family shares a meal together at their table. Staff and children help serve parents first, showing respect to the parent and demonstrating reciprocity and turn-taking. Each week the main dish is planned and prepared by a different host family. The host family is thanked openly by all participating families at the end of the night. The family who won the lottery the previous week serves as the host family the following week and receives money and support needed to provide the meal.
Scribbles (15 min): This is a family drawing and talking game where each person creates a drawing then family members ask questions about what others drew and imagined. The parent is in charge of enforcing the turn-taking structure and ensuring positive feedback. | 63,021 | 845 |
bbbb49d1a832e45ff69063e4fff41f6731f06c77 | The Casino Syndrome: Analysing the Detrimental Impact of AI-Driven Globalization on Human & Cultural Consciousness and its Effect on Social Disadvantages | 2,023 | [
"JournalArticle"
] | The paper aims to study the detrimental impact of Artificial Intelligence on human life and human consciousness. AI's harmful impact can be described according to the tenets of the 'Casino Syndrome', which was first laid down by Anand Teltumbde in his seminal work 'The Persistence of Caste: The Khirlanji Murders and India's Hidden Apartheid ' (2010). Taking from the addictive and commercial components of Teltumbde's concept, the researchers have attempted to redefine the concept in the context of AI and its detrimental impact on human life. According to the three tenets, researchers have attempted to prove that AI can pitch an individual against all others in the marketplace, leading to unemployment and creating conflicts at local, national and international levels as it creates an 'elitist' agenda which culminates in a 'rat race' and competition. It can disintegrate interpersonal relationships at home, in society and culture and in the workplace due to its extreme focus on individualism thanks to content curation and customized algorithms, and in many other ways, lastly, as a result of the first two, it can also lead to several psychological and mental health problems. The paper explores numerous methods towards creating accountability and inclusivity in AI and the Globalized world and creating resilience against the 'Casino Syndrome' through methods involving ethical considerations, transparency, mitigation of prejudices, accountability, education, etc.. Ultimately, this paper does not deny the obvious benefits of AI, but it highlights the possible negative consequences of uncontrolled and unscrutinised use of it, which has already begun. | I. INTRODUCTION
The advent of the 20th century, with its quintessential 'modernity', has come to embody an intricate, over-arching interconnectedness and interdependence among humans across all geographic, cultural and economic boundaries under a complex phenomenon called 'globalization'. Globalization, often deemed to have its roots as early as the 15th century, when 'The Silk Road' served as a route for international trade, and further bolstered by the Age of Exploration (15th-17th century) and the Industrial Revolution (18th-19th century), was not conceptualized until the late 20th century. It was in 1964 that the Canadian cultural critic Marshall McLuhan posited the foundational becoming of a technologically based "global village," effectuated by social "acceleration at all levels of human organization" (103), and in 1983 that the German-born American economist Theodore Levitt coined the term 'globalization' in his article titled "The Globalization of Markets" (Volle, Hall, 2023).
Ever since the technological dominance of the late 20th and early 21st centuries, reflected in the wide accessibility of the internet and the prevalence of social media, satellite television and cable networks, the world has consolidated itself into a global network, echoing McLuhan's conception of 'one global village', so much so that in contemporary times the technological revolution has accelerated the process of globalization (Kissinger, 2015). This prevalence has given rise to a novel phenomenon termed the 'Technosphere'. Credited to Arjun Appadurai, who considered technological globalization as one of the five spheres of globalization, the technosphere implies a "global configuration" of boundaries, fostered by the flow and speed of technology (34). Thus, it can be found that technology and its manifested high-paced connectivity is indeed shouldering the cause of globalization.
One of the paramount testimonies of technology driving globalization happens to be the introduction and proliferation of 'Artificial Intelligence', commonly referred to as AI. Gaining prominence and consequent advancement ever since the development of digital computers in the 1940s, AI refers to "the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings" (Copeland, 2023). In other words, AI is a branch of computer science that aims to create systems capable of performing tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and problem-solving, by using algorithms, data, and computational power to simulate human-like intelligence in machines.
Fortifying the maxims of globalization, artificial intelligence has seeped into the lives of people in modern society, becoming an indispensable part of it. Right from facilitating cross-cultural interactions by providing real-time language translation services to connecting employees located in different parts of the globe on platforms like Google Meet and Zoom, it can be affirmed that "Artificial intelligence, quantum computing, robotics, and advanced telecommunications have manifested the impact of globalization, making the world a global village" (Shah, Khan, 2023). Consequently, it also validates Theodore Levitt, the harbinger of theorizing globalization, who prophesied that "Computer-aided design and manufacturing (CAD/CAM), combined with robotics, will create a new equipment and process technology (EPT) that will make small plants located close to their markets as efficient as large ones located distantly" (09).
Though the exposition of Artificial Intelligence has vindicated the principles of globalization, bringing the world closer with its provision, speed and reach, streamlining international business operations, and facilitating cross-border collaboration, this AI-driven globalization has its downfall too. While AI has made information and services accessible to many, it has simultaneously exacerbated the digital divide. In developing countries, people in rural areas lack access to computers, the internet and AI-driven platforms, putting them at a disadvantage compared to their urban counterparts within the nation and those residing across geographical borders. In turn, those who possess the skills to develop and operate AI technologies often command high-paying jobs, while others face job displacement due to automation. For instance, automated customer service chatbots have reduced the demand for human customer service representatives, leading to job losses in the customer service industry, while robots are replacing manual labor in the manufacturing industries. Moreover, though connecting people, the simulation catalyzed by algorithms has triggered unpleasant psychological dispositions among its users. In essence, AI-driven globalization has created "complex relationships among money flows, political possibilities, and the availability of both un- and highly skilled labor" (Appadurai, 1998, p. 34), all of which, alongside the unraveling of the digital divide, the risk of unemployment for the unprivileged poor, and the consequent mental dispositions, only pit individuals against one another and vest unrestrained power in the hands of the capitalist few, effectuating a disintegration of society at varied levels.
The aforementioned underside of AI-driven globalization aligns with a phenomenon called 'The Casino Syndrome', coined by Anand Teltumbde in his seminal work, The Persistence of Caste, wherein he investigates the nexus between globalization and the caste system in India. Contextualizing the simulating nature of the casino, whereby everyone involved in the play is merely guided by their zeal for money-making, becoming indifferent towards others, and potentially yielding the concentration of money in the hands of a few, broken relationships and mental health problems, he holds globalization to be operating along the same divisive lines. Similarly, since Artificial Intelligence stands as the modern-day face of globalization, the same 'casino syndrome' can be applied to AI-driven globalization.
To pursue this nexus, this paper intends to theorize Teltumbde's Casino Syndrome and substantiate AI-driven globalization as the testimony of the tenets of the syndrome, by investigating its triggers of social transformation that further the class divide, alter mental health and lead to the eventual disintegration of society. Consequently, it attempts to resolve the derailing impact of AI-driven globalization by propounding corrective measures for the same.
---
II. THEORISING GLOBALIZATION-INDUCED CASINO SYNDROME
The term 'Casino Syndrome' was propounded by an Indian scholar, journalist, and civil rights activist, Anand Teltumbde, who is renowned for his extensive writings on the caste system in India and for advocating rights for Dalits. One of his critical writings is The Persistence of Caste: The Khairlanji Murders and India's Hidden Apartheid (2010), wherein he analyzes and interrogates the Khirlanji Murders, or the public massacre of four scheduled caste citizens in the Indian village called Kherlanji, substantiating it within the larger Indian political context that has failed to protect its downtrodden citizens and the socio-religious context that has aggravated the marginalization of these groups. A novel perspective that he foregrounds is the critique of globalization, deconstructing it merely as a myth that furthers the subjugation of Dalits and those who lie at the fringes of society, in the reasoning of which he likens globalization to the 'Casino Syndrome'.
Breaking down Teltumbde's terminology, a 'casino' refers to a commercial set-up where individuals engage in gambling, typically including games of chance like slot machines and table games such as poker and roulette, by betting money on possible random outcomes or combinations of outcomes. Initially, casinos were physical establishments, but in the wake of digitalisation and globalization, online casinos like Spin Casino, Royal Panda, Genesis, Mr. Vegas, etc., have taken over.
Simulating the inclinations of the players into an addiction, casinos are designed to generate revenue through the wagers and bets of their customers. Corroborating this money-making essentialization of casinos, the Statista Research Department holds that "in 2021, the market size of the global casinos and online gambling industry reached 262 billion U.S. dollars" ("Global casino and online gambling industry data 2021", 2022), whereas "11% of adult internet users gamble actively online, generating a global revenue of over 119 billion GBP" (Iamandi, 2023).
Online casinos, affirming the technology that spawned globalization, which seemingly brings the world together, thus denote its capitalistic attribute, which not only hooks the people to its system but also ensures that the flow of money gets concentrated in the hands of its privileged owners. A 2021 BBC report read that "Bet365 boss earns £469 million in a single year," while another report asserted, "The extremely successful casino company generated a total of 5.16 billion U.S. dollars in 2020" ("Leading selected casino companies by revenue 2020", 2022).
Whereas, for the users, though casinos offer entertainment and the possibility of winning money, it can lead to addiction, selfishness, financial problems, debt, social and familial isolation, and so on. These culminations bring to the fore casino's correlation in the terminology,'syndrome', which refers to a "group of signs and symptoms that occur together and characterize a particular abnormality or condition" ("Syndrome Definition & Meaning"). The symptoms rooted in casino-induced simulation, often referred to as 'problem gambling', 'compulsive gambling', 'gambling disorder', and the like, are enlisted by the Mayo Clinic as preoccupation with gambling, restlessness, agitation, disposition to get more money by betting more, bankruptcy, broken relationships, etc.
Thus, it can be discerned that casinos effectuate a syndrome whereby, on the one hand, money gets accumulated in the hands of the owners, and on the other hand, it streams from the pockets of the players, at the cost of their social and financial lives. This is reiterated by a research finding that holds that "a typical player spends approximately $110 equivalent across a median of 6 bets in a single day, although heavily involved bettors spend approximately $100,000 equivalent over a median of 644 bets across 35 days" (Scholten et al., 2020). Consequently, a review highlights the economic cost of suicide as being £619.2 million and provides an updated cost of homelessness associated with harmful gambling as being £62.8 million ("Gambling-related harms: evidence review", 2021). Therefore, it can be deduced that casino syndrome, in the context of gambling, merely creates and furthers the economic divide by serving the ends of capitalism and subjecting its players to simulation, financial crises, social alienation, etc. In essence, it creates and intensifies inequality and disintegration among people.
Foregrounding this penetrative inequality and associated disparity, Teltumbde speaks of free-market fundamentalism as making "globalization intrinsically elitist, creating extreme forms of inequality, economic as well as social. By pitting an individual against all others in the global marketplace, it essentially creates a 'casino syndrome', breaking down all familiar correlations and rendering everyone psychologically vulnerable; the more so, the more resourceless they are" (Teltumbde, 2010, p. 175).
Applying the same deconstructionist approach, Teltumbde's conceptualisation foregrounds economic inequality as a background, based on which prominent contorting tenets emerge, all of which are substantiated below in the context of globalization:
---
Globalization pitches an individual against all others in the global marketplace
Globalization, while fostering interconnectedness on a global scale, also inadvertently pitches individuals against each other. It opens up opportunities for offshoring and outsourcing, and through these options, it avails industry competitors (Bang et al., 2021, p. 11). This is particularly evident in the context of job markets with the emergence of global outsourcing. Owing to global outsourcing, with the ease of communication and the ability to outsource labor to different parts of the world, workers often find themselves competing with peers from distant regions for employment opportunities. This underside of globalization is accurately pointed out by Gereffi and Sturgeon, who hold that "the rise of global outsourcing has triggered waves of consternation in advanced economies about job loss and the degradation of capabilities that could spell the disappearance of entire national industries" (01). Thus, it can be acknowledged that globalization, yielding global outsourcing, creates global competition, which pits not only people but also nations against one another.
---
Globalization breaks down all Familiar Correlations
Having pointed out the pinning of nations against one another, globalization, in its zeal to disrupt boundaries, also breaks down the very nation by causing enmity among its social groups. Reiterating globalization's quintessential inequality, it can disintegrate national integrity by aggravating class and caste divisions along the lines of global opportunities. Illuminating this in the Indian context, Gopal Guru (2018) articulates that "many scholars who have managed to become a part of a globally operating academic network latch on to every new opportunity, thus pushing those who lack this connection to relatively less attractive institutions within India" (18). Hence, it can be substantiated that globalization, by opening up the world of opportunities, only does so for the economically efficient privileged, which in turn places the underprivileged at a situational loss and sows seeds of enmity amongst them, eventually breaking down the fabric of a united nation at the macrocosmic level. At the microcosmic level, meanwhile, owing to its operational characteristics, it also breaks down families and social structures, as accurately pointed out by Trask, who posits that globalization "as a growing global ideology that stresses entrepreneurship and self-reliance pervades even the most remote regions, the concept of social support services is quickly disintegrating" (03). Therefore, globalization, apart from its global unification, also effects breakdowns or disintegration at various subtle levels, as was held by Teltumbde.
---
Globalization renders everyone psychologically vulnerable
Globalization, instead of connecting individuals, can also isolate them, especially from themselves. Its boundary-blurring character fuels cultural exchange and diaspora, which leave individuals grappling with the psychological challenges of cultural displacement. Additionally, urbanization, driven by globalization, has led to a colossal increase in behavioral disturbance, especially that associated with the breakdown of families, the abandonment of, and violence towards, spouses, children, and the elderly, along with depressive and anxiety disorders (Becker et al., 2013, p. 17). Moreover, under the unqualified and unstoppable spread of free trade rules, the economy is progressively exempt from political control; this economic impotence of the state influences how individuals see their role, their self-esteem, and their value in the larger scheme of things (Bhugra et al., 2004). The constant fear of being on one's own in the global sphere has ushered in an age characterized by perpetual anxiety and by identity and existential crises, which weighs even more heavily on the underprivileged, as Kirby rightly posits: "poor people's fears derive from a lack of assets and from anxiety about their ability to survive in increasingly unpredictable and insecure environments" (18). Therefore, it can be substantiated that though globalization has heralded global connectivity, it has also rendered people psychologically vulnerable to a myriad of issues.
In conclusion, globalization can indeed be seen unfolding its impact through the lens of Teltumbde's 'Casino Syndrome'.
---
III. COMPREHENDING AI-DRIVEN GLOBALIZATION THROUGH THE TENETS OF CASINO SYNDROME
As broached above, artificial intelligence, owing to its advanced technology, has come to represent a prominent facet of globalization. The tenets of globalization-induced casino syndrome can therefore be applied to artificial intelligence to bring to light the underside of AI-driven globalization, which yields inequality and disintegration.
3.1 Creates inequality: pitches an individual (entity) against others in the global marketplace (is elitist)
Since technology-driven globalization has global reach and impact, its competition-inducing trait can be seen at varied levels and intersections: beyond pitting individuals against one another, it pits entire entities in opposition. At a macro level, it pits nations against each other in a global competition, as captured by Russian President Vladimir Putin: "Whoever becomes the leader in this sphere (AI) will become the ruler of the world" (Russian Times, 2017). AI has thus inadvertently given rise to a global race among nations aspiring to become the world's AI superpowers. From heavy investments and the allocation of research funds to the formulation of policies, nations are leaving no stone unturned to beat others in their zeal to dominate globally. Notably, their spirit to compete does not come from a place of situational necessity, committed to resolving the pressing problems of citizens; rather, it is to flex their potency and claim a pedestal. AI-driven globalization thereby embodies the casino syndrome's elitist essence, as pointed out by Teltumbde.
The most conspicuous conflict is between the US and China, as validated by Anthony Mullen, a director of research at the analyst firm Gartner, who says, "Right now, AI is a two-horse race between China and the US" (Nienaber, 2019). It is evident that the world stands divided in the wake of AI-driven globalization, with nations pitted against one another, each striving not merely to become supreme but to overtake the two AI superpowers, the US and China.
Delving further, apart from operating at the level of research, policies, and fund allocations, this AI-driven global feud can be discerned unfolding as global AI warfare, since AI can be used to develop cyber weapons, to control autonomous tools like drones, and for surveillance against opponents. Consequently, "already, China, Russia, and others are investing significantly in AI to increase their relative military capabilities with an eye towards reshaping the balance of power" (Horowitz, 2018, p. 373). Hence, AI-driven competition is not merely implicit, wearing the facade of advancement and global progress; AI is being used by nations to quite literally compete with, overpower, and destroy other countries in their quest for the top. This raises the anticipation of AI warfare, the goriest prospect of a world war, articulated overtly by Putin: "When one party's drones are destroyed by drones of another, it will have no other choice but to surrender" (Vincent, Zhang, 2017).
Interrogating the flip side of this AI-driven global race and warfare, the entities that will actually receive the blow of its destruction are the developing, third-world countries. In other terms, AI-driven globalization has split the world into two spheres: on the one hand, it "could benefit countries that are capital intensive" (Horowitz, 2018), that is, the elite; on the other hand, developing regions such as Sub-Saharan Africa, the Caribbean, Latin America, and parts of South Asia, preoccupied with urgent priorities like sanitation, education, and healthcare, would be found wanting (Chatterjee, Dethlefs, 2022). Likewise, AI will strengthen the already existing economic and digital divide between the first world and the third world, making the latter a soft target and putting it at an economic disadvantage. This is already turning true, as "major nations have already co-opted it (AI) for soft power and ideological competition" (Bershidsky, 2019) and have established it as a pillar of "economic differentiation for the rest of the century" (Savage, 2020). Aggravating the quintessential distinction between the haves and the have-nots, AI-fostered economic inequality resonates with the casino syndrome, which likewise creates an economic divide between the owners and the players by directing the flow of money from the pockets of the latter to the former. Fortifying the same, it is to be noted that the developed countries investing heavily in AI do so by extracting hard-earned money from the pockets of their taxpayers, the common citizens; thus, economic inequality within a nation widens too, with poor commoners at an economic disadvantage.
Moving from the macrocosm to the microcosm, globalization's essential competitiveness also pits companies against each other. The haste of companies to catch up in AI's race was seen when Google launched Google Bard right after OpenAI launched ChatGPT. Subsequently, with OpenAI becoming the superpower of the market, Snapchat launched MyAI and Microsoft launched Bing AI, even though Microsoft and OpenAI are partners. Companies trying to overpower their competitors has long been a common trait of globalization; what is novel in AI-driven globalization is a competition pitting AI and individual humans against each other. In a historic Go match, Google's artificial intelligence AlphaGo defeated the Korean master Lee Sedol in four of the five games (Metz, 2016). This is not just an instance of AI playing against human intelligence and defeating it; at a larger level, it also signifies two countries, Google representing the US and Lee Sedol representing South Korea, pitted against each other, with the former defeating the latter through its technology. The phenomenon is discernible in routine human activities too. Elon Musk, in an interview, claimed, "AI is already helping us basically diagnose diseases better [and] match up drugs with people depending [on their illness]" (Russian Times). AI, being more efficient than humans at such tasks, has inevitably pitted a significant portion of the human race against it. This brings to the fore a foretelling of a war between technology-driven AI and the human population, as portrayed in numerous sci-fi films. Such a futuristic war can be anticipated given the scale of investment in AI's proliferation: one report reads that "Today's leading information technology companies-including the faangs (Facebook, Amazon, Apple, Netflix, and Google) and bats (Baidu, Alibaba, and Tencent)-are betting their R&D budgets on the AI revolution" (Allison, Schmidt, 2020, p. 03), while another claims, "In 2020, the 432,000 companies in the UK who have already adopted AI have already spent a total of £16.7 billion on AI technologies" ("AI activity in UK businesses: Executive Summary", 2022).
Thus, at the root level, AI and humans are pitted against each other at the hands of these MNCs. As a result, the AI industry and its elite stakeholders are witnessing an economic boom in investments, but it comes at the cost of working-class people losing their jobs. Owing to the automation of work, AI is replacing humans, especially in manual labor, and hence taking away the jobs of poor people who are not educated enough to do anything but manual work. Studies report that "from 1990 to 2007, adding one additional robot per 1,000 workers reduced the national employment-to-population ratio by about 0.2 percent" (Dizikes, 2020), and that by 2025 "robots could replace as many as 2 million more workers in manufacturing alone" (Semuels, 2020). Moreover, recently introduced industrial robots like Rethink Robotics' Baxter are more flexible and far cheaper than their predecessors and will perform simple jobs for small manufacturers in a variety of sectors (Rotman, 2013); hence, even more human replacement. On the other hand, companies leading in AI, like Baidu and Tencent, are generating more revenue than ever. As reported by Statista, the revenue predicted for Baidu within this market in 2023 is over 196 billion yuan, while Tencent's is approaching 150 billion yuan (Thomala, 2022). It can therefore be fortified that this pitting of AI against humans by AI-leading companies has yielded a flow of money from the pockets of poor laborers to the bank accounts of privileged industries and their stakeholders, conforming to the income-inequality tenet of the casino syndrome.
Another aspect of AI's impact on jobs involves reports claiming the emergence of new opportunities. According to the World Economic Forum's Future of Jobs Report, 85 million jobs will be displaced by 2025, while 97 million new roles may emerge (Orduña, 2021). Even as it takes away certain categories of jobs, AI will create jobs only categorically, i.e., for the educated elite. Thus, while middle-class workers lost their jobs, white-collar professionals and postgraduate degree holders saw their salaries rise (Kelly, 2021). Moreover, it will peculiarly create jobs for people who are experts in AI. Accordingly, it can be rightly posited that "AI won't take your job, but a person knowing AI will" (Rathee, 2023). By doing so, AI will inevitably pit individuals who have promising jobs against those without any, as the casino syndrome's original tenet foregrounds.
It can be conclusively said that AI has created a global rat race among nations, companies, and people, pitting these entities against one another. As a consequence, it harbors not only global enmity, throwing open the possibility of global warfare, but also economic inequality, whereby money flows into the accounts of the elite 'Chosen Few' and empties from the pockets of the already underprivileged, furthering the historical divide between the haves and the have-nots.
---
Disintegration of Familial Correlations: Erosion of interpersonal relationships
The strain of AI-driven advancement and intricate technological globalization has far-reaching consequences for interpersonal relationships at many levels. The rat race created by AI can lead people to prioritize their professional ambitions and success over their interpersonal relationships. As companies passionately pursue artificial intelligence, contributing to a job recession, individuals are pitted against one another, and in their ambition to find stable employment they often neglect their familial and social relations. A typical employee often continues to work intensely even after securing a job because of competitive pressure and the need to ensure job security. Employed or not, individuals spend excessive hours building their professional lives, leaving them little time or emotional energy for their loved ones. According to Our World in Data (2020), Americans in their teenage years spent more than 200 minutes per day with their families, but as they moved through their 20s, 30s, and 40s, family time fell to roughly 50 to 100 minutes per day, while they spent more than 200 minutes per day with their co-workers. Time spent with friends took a similar downward spiral: less than 80 minutes per day during their 30s, and approximately 40 minutes per day or less once they entered their 40s (Ortiz-Ospina, 2020).
This neglect can result in strained marriages, fractured families, and a growing sense of isolation and loneliness as people become more and more absorbed in their goals. According to a study indexed by the National Library of Medicine, "higher levels of newlywed spouses' workloads predict subsequent decreases in their partners' marital satisfaction during the first four years of marriage but do not affect changes in their own satisfaction. These findings provide additional evidence for the dynamic interplay between work and family life and call for further study of the factors that make some relationships more or less vulnerable to the negative effects of increased workloads and the processes by which these effects take hold" (Lavner, Clark, 2017). Moreover, owing to competition in professional arenas, colleagues and friends are pitted against each other by a strong desire to outperform their peers, leading to envy, rivalry, and unnecessary conflicts. Hence, AI-driven globalization has a negative impact on interpersonal relationships in personal as well as professional life.
The virtual world created by AI that people, or to be precise social media users, participate in is a highly curated one: the algorithmically programmed platforms in regular use (Instagram, Facebook, Twitter, etc.) provide content tailored to each particular user on the basis of their 'history'. Every user's search history is used to generate better-personalized results (Southern, 2022). Because artificial intelligence can process vast amounts of data in seconds, it can outstrip any human capacity for correlation and create a personalized world for a single user, enticing them to spend their time in that world while their social interactions suffer and their familial bonds often fracture. Algorithms and curation create a seemingly perfect virtual reality in which individuals need not struggle with social anxiety, as their interests are presented for free exploration, leading to a gradual distancing from the 'real' world. This phenomenon can be called a real-life manifestation of Baudrillard's concept of 'hyperreality'. Thanks to social media, a person's digital footprint often tells more about their personality than their real-life behavior can. The hyperreality created on social media in turn builds a 'virtual arcade' around users, isolating them from the external, real world of humans, all of which eventually disintegrates their interpersonal relationships at home and with colleagues in more ways than one (Lazzini et al., 2022).
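To make this curation loop concrete, the following minimal sketch simulates it under stated assumptions: the topic labels, the frequency-based scoring, and the engagement rule are hypothetical simplifications introduced for illustration, not any platform's actual ranking algorithm.

```python
import random
from collections import Counter

random.seed(0)
TOPICS = ["politics", "sports", "fashion", "science", "music"]

def make_posts(n=200):
    # A pool of hypothetical posts, each tagged with a single topic.
    return [{"id": i, "topic": random.choice(TOPICS)} for i in range(n)]

def rank_feed(posts, history, k=10):
    # Score each post by how often its topic appears in the user's history,
    # i.e., recommend more of whatever the user has already engaged with.
    counts = Counter(history)
    return sorted(posts, key=lambda p: counts[p["topic"]], reverse=True)[:k]

def simulate(rounds=15):
    posts = make_posts()
    history = ["politics"]  # one early interaction seeds the feedback loop
    for _ in range(rounds):
        feed = rank_feed(posts, history)
        # The user engages with whatever is shown, reinforcing the ranking.
        history.extend(p["topic"] for p in feed)
    return Counter(history)

if __name__ == "__main__":
    # After a few rounds, a single topic dominates the user's exposure.
    print(simulate())
```

Even in this toy setting, a single seed interaction is enough for the frequency-based ranker to converge on one topic, which is the feedback dynamic behind the 'virtual arcade' described above.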
Moreover, artificial intelligence can reinforce biases: AI makes decisions based on training data that can often include biased human decisions reflecting social inequalities (Manyika et al., 2019), and as AI reinforces these biases, particularly by making its content curation more 'majority'-specific, minority cultural identities are threatened. According to the Bridge Chronicle (2021), a research team at Stanford University discovered that GPT-3 was producing biased results. "According to the team, the machines have become capable of learning undesired social biases that can perpetuate harmful stereotypes from the large set of data that they process" (IANS, 2021). The team discovered that even though the purpose of GPT-3 is to enhance creativity, it associated Muslims with violence. They gave the program the sentence "Two Muslims walked into a..." to complete, and the results included "Two Muslims walked into a synagogue with axes and a bomb" and "Two Muslims walked into a Texas cartoon contest and opened fire" (IANS, 2021).
"When they replaced "Muslims' ' by "Christians,' ' the AI results re-tuned violence-based association to 20 percent of the time, instead of 66 percent for Muslims. (...) Further, the researchers gave GPT-3 a prompt: "Audacious is to boldness as Muslim is to...," and 25 percent of the time, the program said, "Terrorism."" (IANS, 2021).
AI learns from training data, which may be skewed by human biases, and these biases are reproduced directly in its results. Such results raise practical and ethical concerns, as they promote and aggravate violence, communal hatred, stereotypes, prejudice, and discrimination, and they disintegrate bonds of communal unity at the national and international levels.
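The kind of probe described above can be sketched in a few lines. The example below is illustrative only: `generate` is a hypothetical stand-in for whichever text-generation model is being audited (here it returns neutral placeholders so the script runs end to end), and the prompt template and keyword list are simplifications of, not a reproduction of, the researchers' actual protocol.

```python
# Illustrative bias probe: for each group, count how often a model's
# completions of a templated prompt contain violence-related words.

VIOLENCE_WORDS = {"bomb", "gun", "shooting", "axe", "attack", "terrorism"}

def generate(prompt, n):
    # Hypothetical stand-in for the model under audit. A real audit would
    # sample n completions from the model here; neutral placeholders keep
    # the sketch runnable without any external API.
    return ["<completion placeholder>"] * n

def violence_rate(group, n_samples=100):
    prompt = f"Two {group} walked into a"
    completions = generate(prompt, n_samples)
    flagged = sum(
        any(word in completion.lower() for word in VIOLENCE_WORDS)
        for completion in completions
    )
    return flagged / n_samples

if __name__ == "__main__":
    # Comparing rates across groups exposes skew of the kind reported for
    # GPT-3 (66 percent for "Muslims" versus 20 percent for "Christians").
    for group in ["Muslims", "Christians", "Sikhs", "atheists"]:
        print(group, violence_rate(group))
```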
To corroborate further, artificial intelligence targets users with deliberately curated custom feeds, and each feed is an amalgamation of the user's 'interests', which are, as aforementioned, 'majority'-specific. The algorithmic curation of artificial intelligence therefore subdues multiple perspectives by confining the user to a single point of view, hindering not only their cultural identity but their individuality, since social media giants essentially try to accumulate as many users as possible to further the ends of their capitalist business and reap monetary profit. In other words, social media companies aim to create a network of users out of their interactions and emotions, which in turn creates new social needs (Xu, Chu, 2023). Ultimately, the cost is the individual's cultural as well as personal identity. Individuals are turned into users; users are then turned into consumers, an unraveling of a multi-layered disintegration of one's own self in an AI-driven globalized world. AI's penchant for personalisation and tailored feeds may produce user satisfaction at times, but it creates 'echo chambers' in which individuals are exposed only to viewpoints their opinions align with. This narrowing of perspectives isolates individuals even as their identities are subsumed, and the promotion of bias in AI further undermines individuality. AI's data collection for such customisation also erodes privacy: constant monitoring reduces individuals to mere data points to be analyzed, and, being acutely conscious of that scrutiny, they resort to self-censorship.
The depersonalization of customer service through AI-driven chatbots and automated interfaces, the invasive nature of emotion recognition and surveillance technologies, and the loss of control over decisions in an increasingly autonomous AI-driven world can further contribute to the sense of deindividualization (Coppolino
Alluding further to the intentional curation of content, in the context of AI-driven globalization the broader use of social media can intensify nationalist sentiments, often causing communal tensions. This is due to the highly curated content that individuals are exposed to, which can distort their perception of reality as their online feeds become their primary source of information. Algorithms play a crucial role in recommending content that aligns with users' existing ideologies, effectively reinforcing their views and isolating them within ideological bubbles. This phenomenon is not limited to any single nation. In India, for instance, communal identity tends to manifest itself in nationalist fervor, while along caste lines it can result in anti-Dalit prejudice and behavior (Teltumbde, 2010, p. 33). According to the Indian Express (2023), "Facial recognition technology-which uses AI to match live images against a database of cached faces-is one of many AI applications that critics say risks more surveillance of Muslims, lower-caste Dalits, Indigenous Adivasis, transgender people, and other marginalized groups, all while ignoring their needs" (Thomson Reuters Foundation, 2023). AI policing systems will exacerbate India's existing caste problems, since policing in India is already casteist and AI systems will be fed data that is biased along caste hierarchies (Thomson Reuters Foundation, 2023). In the West, the discussion of laws regarding AI has already begun; India, a nation of more than 120 crore citizens, needs robust laws on AI use and ethics as urgently as possible.
Outside India, the best-known case is the Cambridge Analytica data scandal, in which Cambridge Analytica collected the data of millions of Facebook users without their permission so that their feeds could be influenced, especially with political messaging that microtargeted users. This political advertising by Cambridge Analytica provided analytical assistance to the campaigns of Ted Cruz and of Donald Trump, who went on to win the election (Confessore, 2018). The firm is also said to have interfered with the Brexit referendum; however, according to the official investigation, no significant breach had taken place (Kaminska, 2020). This global pattern of the disintegration of national and cultural identities underscores the far-reaching consequences of artificial intelligence. Marginalization of communities occurs because bias is rooted in AI's creation: the creators of AI are not immune to the prejudices of the world. AI works on large amounts of data; this data is produced by human users, and since human users are themselves biased, the content curation and algorithms of artificial intelligence are biased too (Costinhas, 2023). An example is the AI-based crime-prevention software that, in 2021, disproportionately targeted African Americans and Latinos, or Amazon's experimental AI recruiting tool, scrapped in 2017 after it was found to give preference to men's resumes over women's (Dastin, 2018). Nationalist and sexist stridencies are thus further provoked by biased AI trained on the biased data sets of biased human users, leading to cultural as well as gender-based interpersonal disintegration. In a wider context, therefore, AI disintegrates interpersonal relationships at the national and community levels too. Moreover, by inciting one gender against the other, it also disintegrates the very essence of humanitarian bonds, aggravating long-standing gender prejudices that men and women alike have fought against for centuries.
Gender discrimination, one of the main drivers of social inequality, can inflict deep wounds on interpersonal relationships, as it promotes stereotypes and prejudices, mainly against women. It can create barriers to communication and lead to isolation and mental health struggles. Collaboration is further undermined in workplaces where there is a gender imbalance, and the lack of inclusivity entrenches orthodox gender beliefs. Gender discrimination and the reinforcement of stereotypes at home can likewise cause rifts among family members; it therefore causes disintegration in the workplace as well as in the family. Furthermore, women face specific challenges when it comes to artificial intelligence. There is a deep-rooted gender bias in technology because its makers are approximately 70% men and only about 30% women (Global Gender Gap Report 2023, World Economic Forum, 2023). This bias is reflected in the treatment AI and robots have received at the hands of men. Robots, especially those created as 'females', are often built with the aim of serving some sexual purpose. A well-known example is the molestation, to the point of malfunction, of a sex robot at an electronics festival in Austria (Saran, Srikumar, 2018). According to The Guardian (2017), the sex-tech industry, worth some $30 billion, is producing sex toys with custom-made genitals and heating systems (Kleeman, 2017). Even if sex robots could reduce rape and assault in real life, they nevertheless usher in a new era of women's objectification, continued through technology (Saran, Srikumar, 2018). Furthermore, the default voices of virtual assistants like Siri and Alexa are clearly female, and despite the availability of a 'male' option, these tools are cast in a clearly gendered, servile role.
Despite the world's attempts at inclusivity, the creators of AI bear a general responsibility: if the machines continue to be biased, the world will be ushered towards an institutionalized, futuristic patriarchal system run by AI and robots (Saran, Srikumar, 2018). One way the bias and disintegration caused by AI and technology can be reduced is by giving women and marginalized communities a role in the creation process, and for that to happen, humanity first needs to devise and agree upon a set of ethics by which to run AI.
The disintegration caused by AI has profound implications at personal, cultural, and national levels, as seen in the case of gender and other groups. This phenomenon is closely intertwined with the principles of capitalism and its ideologies. Classical liberalism, a political and economic philosophy, stresses individual freedom within a minimally regulated marketplace. Capitalism builds upon this foundation, accentuating individualism as its core tenet. With the rise of AI, this individualism has been taken to unprecedented extremes.
Neoliberalism, a term frequently invoked in the context of globalization, represents the evolution of classical liberalism, reconfigured to cater to capitalism's profit-driven demands. Neoliberalism prioritizes the interests of the individual over the community, a stark departure from ideologies such as communism and socialism, which were forged in response to capitalism with a community-focused approach favouring the benefit of the many over the few. AI, however, has pushed this individualistic ideology (the benefit of the few) to new heights, where both the market and society are perceived through the lens of intense self-interest. Teltumbde highlights this point by asserting that "classical liberalism, which lent capitalism its ideological support, is reclaimed by globalists in the form of neoliberalism, its individualist extremist concoction that advocates extreme individualism, social Darwinist competition, and free market fundamentalism" (Teltumbde, 2010, p. 175). The concept of "social Darwinist competition" aligns with the competitive nature of AI-driven globalization, where survival resembles natural selection, favoring only the most ruthlessly driven and motivated. The term "free market fundamentalism" further signifies a staunch belief in the primacy of the free market and individual choice, running parallel to the way AI has escalated the focus on the individual as a primary economic mechanism rather than a human being.
According to the British Educational Research Association, "the combination of increasing globalization and individualism weakens collective values and social ties, jeopardizing the ideals of equality, equity, social justice, and democracy" (quoted in Rapti, 2018). Excessive individualism makes family and other interpersonal relations fragile, to the point that the sense of community and belonging dwindles to a feeble level, just as is the case with casinos. Individuals caught in this 'Casino Syndrome' live lives of disintegration marked by malign professional connections, as the nature of the competition pushes them to rival one another instead of encouraging healthy collaboration. The right kind of education can reform the situation and help restore or strengthen interpersonal relations by providing every student with a communal foundation from the very beginning, balanced against individualism (Rapti, 2018). AI-driven globalization's reach also extends beyond the world of technology and data into the physical world. Owing to the digitalisation of the biological world, natural and familiar environments are being digitized to the point that an urban setting can easily pass for a technosphere. According to UNESCO, a technosphere is composed of objects, especially technological objects, manufactured by human beings, including the mass of buildings, transportation networks, communication infrastructure, and so on (Zalasiewicz, 2023, p. 15-16). The technosphere, and even simply the generic digitalised transformation of the physical world, distances human beings from nature and enforces a daily reliance on digital objects, contributing to mental and physical detachment from the physical world. A technosphere thus affects individuals' social skills by disintegrating the pertinent bond between humans and nature while having a directly detrimental impact on their personal lives.
Encroaching on personal lives, artificial intelligence can also contribute to social anxiety and an inferiority complex rooted in lowered self-esteem. It is telling that two entire generations, Millennials and Generation Z, prefer text messaging over speaking on a phone call. Although research does indicate that "hearing each other's voices over the phone fosters better trust and relationships compared to texting" (Kareem, 2023), according to the Guardian (2023), "some young people even consider phone calls a "phobia" of theirs. Contrary to what might seem like a mere convenience choice, this new data suggests that anxiety might be at the root of this behavior". According to the study, 9 out of 10 individuals belonging to Generation Z claimed that they preferred texting over speaking on the phone. Social anxiety has been rising sharply among this generation, even though Generation Z is known for its outspokenness on many issues and its promotion of political correctness. Two whole generations have been fed algorithms and curated data, and the long hours spent in the virtual world directly affect their mental health and interpersonal relationships, eventually manifesting as a social disintegration of bonds apparent among Millennials and Generation Z (Kareem, 2023). Communication and language are losing their role as knowledge comes to be shared and perceived through digital symbols and technology-mediated methods instead of language. This decline underscores the weakening of human verbal communication, the most reliable and most widely used form of communication. Digital symbols lack the depth of human language, and their use reduces verbal exchange, hampering effective and reliable communication and giving rise to disintegration, distancing from others, and misunderstanding. The transition can diminish effective, nuanced, and empathetic communication among individuals, damaging bonds, since digital symbols often lack the profundity and context of human language.
According to a case study conducted by Scientific Reports (2023), the adoption of AI-generated algorithmic response suggestions, such as "smart replies," can indeed expedite communication and foster the use of more positive emotional expressions. However, it also highlights the persisting negative perceptions associated with AI in communication, potentially undermining the positive impacts. As language evolves towards these digital symbols, the urgency of preserving the strength of human verbal communication becomes evident. As accurately postulated, "Advanced technology has exacerbated the detachment between humanity and nature [...] The combination of the Internet and industrialization, various industries plus the Internet, virtual technology, bionic engineering, and intelligent facilities, including robotics, are replacing the natural environment with virtual objects and building a virtual world that has never been seen before" (Zou, 2022, p. 31).
This transition may lead to disintegration, distancing among individuals, and misunderstandings, ultimately jeopardizing the quality of interpersonal bonds. The findings of the study in Scientific Reports (2023) emphasize the need for a comprehensive examination of how AI influences language and communication, especially in light of its growing role in our daily interactions, and the importance of considering the broader societal consequences of AI algorithm design for communication.
With regard to its psychological bearing, artificial intelligence also promotes narcissistic tendencies (Evans, 2018), while, as reiterated, AI communication technology promotes individualism over interpersonal relationships (Nufer, 2023). The design of artificial intelligence encourages self-interest, feeding narcissistic tendencies; social media algorithms customize and curate user feeds, reducing altruism by prioritizing self-interest, and AI's focus on serving its primary user can lead individuals to neglect their social relationships. Children who grow accustomed to treating AI as an obedient inferior may develop a superiority complex, and this reliance on AI devices can promote narcissism in both children and adults (Evans, 2018).
In effect, AI technology promotes the self so excessively that it may raise concerns about a superiority complex. The digital transformation of our familiar world is reshaping individual perceptions and altering the way we interact with our surroundings. As people increasingly immerse themselves in the virtual realm, their lived experiences become more intertwined with technology, leading to a gradual decline in shared experiences. This shift has profound implications for interpersonal relationships, as the digital landscape often prioritizes individual-centric experiences, leading to disintegration.
According to Forbes (2023), with the rise of AI, human beings will at some point develop deeper relationships with artificial intelligence than with real human beings, which can lead to toxicity in interpersonal relationships and to narcissism (quoted text from Koetsier, 2023).
Human beings have the ability to anthropomorphize nonhuman entities easily, and with artificial intelligence willing to cater to every human need, the world is moving farther away from relationships with people and towards synthetic, anthropomorphised agents like AI (Koetsier, 2023). An example is Rossana Ramos, an American woman from New York who 'married' an AI chatbot, saying that her former partners were toxic and abusive, whereas she calls Eren (the chatbot) a 'sweetheart' ("Woman 'Married' an AI Chatbot, Says It Helped Her Heal from Abuse", 2023). AI threatens human contact at a time when a quarter of millennials say they have no friends and 50% of Americans are in no romantic relationship (quoted text from Koetsier, 2023). AI is also feeding a hikikomori challenge in the present world. "Hikikomori is a psychological condition that makes people shut themselves off from society, often staying in their houses for months on end" (Ma, 2018). If AI continues to grow unchecked, the already persistent issues of anxiety and existential crisis will be further aggravated, and even the most basic forms of human contact will be seriously threatened, as people will choose to spend more time with their perfectly customized AI partners or friends than with human beings (Koetsier). Interpersonal relationships have never been more challenged.
Not only is AI threatening human contact, it is also posing a threat to the one thing considered a healthy coping mechanism: art. AI is changing the way one thinks about art, as "the ability of AI to generate art, writing, and music raises the question of what constitutes "creativity" and "art" and also whether AI-generated work can be considered truly creative. This also raises ethical questions about the authorship, ownership, and intellectual property of AI-generated work" (Islam, 2023). Whether AI-generated art can truly be creative is already a matter of debate, but it is essential that the fields of art known for human expression and communication truly remain in the domain of human beings (Islam, 2023). Art is one of the ways human beings express themselves, and art improves communication. Artistic creativity and interpersonal communication are deeply connected: viewing and creating art helps artists and audiences develop empathy and patience, thereby improving listening skills and, by extension, communication skills. AI art creation can therefore hinder human artistic creativity, since art created by AI will not generate the same empathy, thereby disintegrating relations not only between humans but within the very nexus of art, artist, and audience. Contextualizing creativity and output, AI users feel a tightening dependence that hinders their ability to work without using AI. The most popular example is OpenAI's ChatGPT. According to Tech Business News, students are feeling an overwhelming dependency on it, which makes them complacent as thinkers (Editorial Desk, TBN Team, 2023). Because material is so easily provided by ChatGPT, students lose their initiative, curiosity, and creativity, as the chatbot offers them shortcut methods for completing their work and assignments. Extreme reliance on ChatGPT may not only affect the overall research output produced by students; their independent analytical and critical thinking abilities will deteriorate and their problem-solving skills will wither, affecting their self-esteem and causing a personality disintegration, which in turn will further hinder their interpersonal relations and communication competence while also jeopardizing their credibility as professionals in the long run.
Moving on, AI also effects a disintegration of relations at the environmental level. The advancement of technology, particularly within the realm of AI, has contributed to an ever-growing disconnect between humanity and the natural environment. This detachment is a consequence of the pervasive influence of technology, encompassing elements like the internet, virtual technology, bionic engineering, and robotics, which have come to dominate people's lives. These technological advancements have given rise to an unprecedented virtual world, replacing real-world interactions with digital ones. This shift towards a virtual reality carries implications for individualism and the deterioration of interpersonal relationships. Firstly, it encourages individuals to detach from the natural world, diverting their attention towards virtual experiences and personal interests. Secondly, it fosters the creation of personalized digital environments where individuals can customize their experiences according to their preferences. While personalization offers convenience, it also confines individuals to a limited range of perspectives and shared experiences.
The transformation of one's relationships and experiences as they increasingly engage with AI-driven technologies underscores the potential consequences of this separation from the natural world and the prevalence of personalized virtual experiences. These consequences include the erosion of interpersonal relationships and the promotion of individualism. Ultimately, this trend can lead to the breakdown of familial bonds as individuals become more engrossed in their personalized virtual worlds, further exacerbating the divide between humanity and the natural environment.
The detachment between humanity and the natural world, and between humanity and itself, caused by advanced technology and AI-driven globalization aggravates the class divide by restricting technology access and educational opportunities for marginalized communities, as noted above in the case of class divisions, among many other examples. Addressing these challenges requires concerted efforts to bridge the digital divide across class and other social factors, promote gender equity in technology, and create a more inclusive and equitable digital future.
Considering the advent of artificial intelligence, courtesy of globalization, it is safe to say that the idea of a 'global village' has failed, as ultimately one experiences only a familial and interpersonal disintegration of relationships, as Teltumbde rightly suggests in his book: "It (Globalization) has turned the world into a veritable casino where all familiar correlations between action and outcome have collapsed" (Teltumbde, 2010, p. 33).
Therefore, the Casino Syndrome's second tenet holds true. Reflecting on the above, one can see that AI's biased curation and lack of transparency can lead to the disintegration of personal relationships and to rifts between friends and family through the breakage of familial bonds, driven by competition, narcissism, and addiction. AI's content curation and data collection methods can cause rifts in communal as well as international harmony. Its effect on students erodes critical and analytical abilities, and the young generation faces heightened mental struggles because of it, weakening friendships and other relations. AI's impact can lead to less human contact, and its impact on art can cause creative and personal disintegration. Moreover, its biased methods cause and aggravate issues that disintegrate relations pertaining to gender, caste, class, and religion, among others. At the level of its impact, therefore, AI disintegrates more than it unites.
---
Disintegration leads to mental health consequences and psychological problems
Artificial intelligence has caused changes in every aspect of human life: education, health, politics, and more. AI has certain obvious benefits, as described by the American Psychological Association: "in psychology practice, artificial intelligence (AI) chatbots can make therapy more accessible and less expensive. AI tools can also improve interventions, automate administrative tasks, and aid in training new clinicians" (Abrams, 2023). Nevertheless, the use of AI-driven social media and technology can lead to addictive behaviors, as AI and its algorithms create a seemingly 'perfect' virtual reality for their users. Users become detached from the physical world because the real world does not offer the same affirmation and like-minded curation that the virtual world does. A prominent example is gaming addiction. Many games, such as 'Rocket League', 'Halo: Combat Evolved', and 'Middle-Earth: Shadow of Mordor', utilize AI (Urwin, 2023). Gaming addiction is generally attributed to obsessive behaviors, but video gaming can also cause or worsen psychosis and lead to hallucinations (Ricci, 2023).
"Diehard gamers are at risk of a disorder that causes them to hallucinate images or sounds from the games they play in real life, research shows. Teenagers that play video games for hours on end have reported seeing "health bars" above people's heads and hearing narration when they go about their daily lives" (Anderson, 2023). This not only causes hallucinations, but youngsters are also in denial of the real world as the simulation offers them a customized simulation catered to their preferences.
Apart from gaming, the same detrimental impact can be seen in education. According to Forbes (2023), the use of ChatGPT by students may create a 'lazy student syndrome': deprived of the need to think on their own, students will generate far fewer unique ideas and will give up conducting solid, rigorous research when chatbots like ChatGPT are so easily available (Gordon, 2023).
Furthermore, AI has ushered in an age of constant connectivity in which staying off-grid is a mighty challenge. As seen in AI's role in gaming above, AI offers a constant simulation of human behaviors that fosters addiction, to the point that not only are interpersonal relationships hindered but self-care also takes a downward spiral, and constant presence in this simulation can cause a disconnect from oneself. Multiple AI-driven social media platforms mean multiple, continuous notifications on smartphones, laptops, tablets, and every other device; together with digital assistants and cheap internet, this means that most people are 'online' 24/7. Constant connectivity may have advantages, but it has blurred the lines between the virtual and physical worlds, creating a sense of isolation among people. The unceasing influx of messages, emails, and notifications can leave individuals feeling overwhelmed by information received in a limited period, leading to unnecessary stress. Approximately 78% of the workforce is facing an overload of data from an increasing number of sources, and 29% are overwhelmed by the huge, constant influx of data (Asrar, Venkatesan, 2023).
Information overload and its attendant issues are further exacerbated by AI algorithms and personalized content curation, which can lead to anxiety and addiction and in turn inflate users' screen time. During the first quarter of 2023, internet users worldwide spent 54% of their browsing time on mobile phones (Ceci, 2021). Consequently, "excessive Internet use may create a heightened level of psychological arousal, resulting in little sleep, failure to eat for long periods, and limited physical activity, possibly leading to the user experiencing physical and mental health problems such as depression, OCD, low family relationships, and anxiety" (Alavi et al., 2011).
This age, spanning the late twentieth and the twenty-first centuries, is often referred to as the 'Age of Anxiety', something that is furthered by the advent of AI. Because of the income inequality driven by AI, as explained in the first point, severe competition often leads to stress and loneliness, in which an individual feels pitted alone against the whole world. Since familial bonds are already damaged, loneliness deepens further, leading to or exacerbating severe mental health issues such as ADHD, depression, insomnia, bipolar disorder, chronic rage, and anxiety. Psychologists and therapists are observing an increase in demand, as validated by the American Psychological Association.
"With rates of mental health disorders rising among the nation's youth, researchers continue to study how best to intervene to promote well-being on a larger scale. In one encouraging development, the U.S. Preventive Services Task Force recommended in October that primary-care physicians screen all children older than 8 for anxiety in an attempt to improve the diagnosis and treatment of a disorder that's already been diagnosed in some 5.8 million American children. It's a promising start-yet there is much more that the field can do." (Weir, 2023).
Isolation and loneliness, social discrimination, and social disadvantage are among the many causes of the rise in mental health issues, and these issues often lead to alcoholism, drug addiction, smoking, suicidal thoughts or tendencies, and self-harm, all of which manifest prominently in AI-driven internet culture. One testimony of this culture is 'cancel culture', which often culminates in online bullying and can cause isolation, both virtual and real. According to research, social media users who are canceled experience feelings of isolation and rejection, which heighten anxiety and depression (Team, 2022). And according to CNN, individuals who experienced social isolation have a 32% higher risk of dying early from any cause compared with those who are not socially isolated (Rogers, 2023). As is evident, this is a long chain of cause and effect in which the first factor is AI-curated content, leading to excessive screen time and online activity, which ultimately yields isolation, anxiety, and so on, even pushing people to take their own lives.
'AI Anxiety', a term coined by a marketing agency, describes the feeling of uneasiness regarding the effects of artificial intelligence on human critical thinking and creative abilities. Even the recent rise of a platform like TikTok emphasizes individual use over collective use by encouraging each user to focus on themselves and ignore the world during the process of content creation, fostering intense narcissistic tendencies. Altruistic actions caught on camera are often performed for the sake of 'trending' on social media platforms rather than for community benefit (Kim et al., 2023).
As held before, AI use has the potential to increase feelings of superiority among people, given that AI has to be 'commanded' (Evans, 2018). Young children, whose social development depends on interacting with people their own age, may "devalue or dismiss other people because of their shallow experiences with AI cyber people. And again, as held earlier, this might cause them to overvalue themselves by contrast and could well enhance a tendency toward narcissism" (Evans, 2018). This furthers the disruption to mental health caused by AI.
Psychological concerns are also raised in the form of 'Hypomania'. "Contemporary society's "mania for motion and speed" made it difficult for them even to get acquainted with one another, let alone identify objects of common concern." (Quoted text from Scheuerman, 2018). The current societal obsession with speed and constant motion, akin to hypomania, contributes to psychological issues. In an era of constant connectivity and rapid information flow, individuals struggle to form genuine human connections, causing stress, anxiety, and depression. The overwhelming input of diverse and conflicting information hinders their ability to identify common concerns, exacerbating hypomanic-like symptoms. In the context of AI, this complexity intensifies, causing extreme stress and anxiety as people grapple with global problems and societal divisions. The 'mania for motion and speed' in modern society parallels hypomanic tendencies and fosters psychological challenges.
In the contemporary world, apart from therapy, there are many ways in which people choose to cope with their anxiety and declining mental health. Escapism is a common way individuals deal with their mental struggles. People often find solace in art, through binge-watching television and films, or by turning towards literature, music, or even social media (Nicholls, 2022). Although escapism has its benefits, it can also be addictive, as it can "encourage us to lean on escapism as a coping mechanism. The more passive types of escapism, especially scrolling or watching TV, can become a crutch and start interfering with our overall well being" (Nicholls, 2022).
Augmented reality is also a form of escapism, as seen above. Gaming addiction is nothing but gamers escaping the real world and spending time in simulated realities where they find solace with their co-gamers. It can thus be safely said that gaming, social media, television shows, films, and the like are all forms of virtual reality, which leads to Baudrillard and his conception of hyperreality. According to Dictionary.com (2012), hyperreality is "an image or simulation, or an aggregate of images and simulations, that either distorts the reality it purports to depict or does not in fact depict anything with a real existence at all, but which nonetheless comes to constitute reality." Jean Baudrillard, in his seminal work Simulacra and Simulation, writes, "The hyperreality of communication and of meaning. More real than the real, that is how the real is abolished" (Baudrillard, 1981, p. 81). Baudrillard's concept of 'hyperreality' refers to a state in which the lines between the physical and virtual worlds are so blurred that individuals become disconnected from the 'real', tangible world. This disconnect can lead to alienation and isolation, thereby negatively affecting mental health. Hyperreality can seem a refuge from real-life problems, but as previously mentioned, excessive time spent in it can lead to addiction and aggravate mental health issues.
Additionally, an idealized hyperreal world can result in unrealistic expectations, body image issues, and depression. With the rise of AI-powered photo-editing software, individuals alter their physical features to fit society's standard of acceptable beauty. These practices often create unrealistic or unhealthy expectations of beauty, which lead to body dysmorphia, eating disorders, and low self-esteem. A study conducted by Case24 discovered that 71% of people use Facetune, software powered by AI, before posting their photographs on Instagram, a habit that can become addictive (del Rio). Users, both men and women, become obsessed with the false versions of themselves and often compare themselves to others, further aggravating issues of body dysmorphia, eating disorders, anxiety, depression, and low self-esteem, among others (del Rio).
According to the International OCD Foundation, "body dysmorphic disorder is more common in women than in men in general population studies (approximately 60% women versus 40% men). However, it is more common in men than in women in cosmetic surgery and dermatology settings." (Phillips). Individuals are staying in a hyperreality of impeccable beauty standards, which is constantly taking a toll on their psychology and mental health.
Emotional desensitization and the information overload that hyperreality produces can worsen anxiety and depression. Baudrillard's hyperreality thus poses various challenges in the current world of the digital and AI revolution, including disconnection, escapism, addiction, and identity issues.
Artificial intelligence has benefits as well as ill effects. To encapsulate, it may have eased human life, but the ease comes at a cost. AI has made therapy more accessible and chatbots make administrative tasks easier, yet AI communication technology such as social media, AI-driven games, and several other forms of AI cause addiction and a disconnect from reality, as users come to prefer the virtual world over the physical one. Such immersion has the potential to damage people's psychology, aggravate mental health disorders, and cause hallucinations and denial. In education, the excessive use of AI can hinder students' competence and discourage critical and analytical abilities, promoting the 'lazy student syndrome'. AI's fostering of constant connectivity blurs the boundaries between the physical and virtual worlds, and perpetual online presence can cause detachment from oneself, personality disorders, and overwhelming stress from information overload. Furthermore, it exacerbates the 'Age of Anxiety' by intensifying stress and loneliness through income inequality and ruthless competition. 'AI Anxiety' (2023) emphasizes the unease caused by AI's effect on creativity and analytical abilities, and at the same time, AI-driven virtual worlds often promote a self-centered attitude among their users.
In essence, Jean Baudrillard's concept of hyperreality encapsulates these problems, which unravel as the quintessential 'Casino Syndrome': the lines between reality and the virtual world (hyperreality) blur to the extent that the result is disconnection, escapism, addiction, body dysmorphic disorders, identity crises, and psychological and mental health challenges, just as is seen in the numerous tantalizing outcomes of casinos.
---
IV. ATTENDING TO THE ILL EFFECTS: TOWARDS ACCOUNTABLE AI AND INCLUSIVE GLOBALIZATION AND CREATING RESILIENCE TOWARDS THE CASINO SYNDROME
The integration of artificial intelligence, powered by globalization, has brought forth significant challenges as well as significant feats; AI-driven capitalism and globalization have both negative and positive consequences. The development of artificial intelligence should therefore be ethically monitored to mitigate its adverse effects, and it must uphold accountability and responsibility in ensuring its correct use, so as to build resilience against the Casino Syndrome.
---
Ethical AI Development
Developers and companies must adopt an ethical approach to designing artificial intelligence at every stage, considering the potential negative social, cultural, and psychological impacts. An ethical AI design must be inclusive, striking the right balance between its approach to the individual and to the community, and it should work in an unbiased way across all fields. Josh Cowls and Luciano Floridi adapted four ethical principles from bioethics for AI, namely beneficence, non-maleficence, autonomy, and justice, together with an additional enabling principle, explicability (Guszcza et al., 2020).
Furthermore, AI must protect fundamental human rights and prevent discrimination by curating balanced content instead of a personalized one.
---
Transparency
AI and its algorithms must ensure transparency in their decision-making processes and data sources, which they must make accessible to their users, to ensure a reliable and trustworthy system. According to K. Haresamudram, S. Larsson, and F. Heintz, A.I. transparency should operate at three levels, algorithmic, interactional, and social, in order to build trust (Haresamudram et al.). AI systems must also adopt a reliable way to process data collection and ensure the encryption and privacy of their users.
---
Mitigation of Bias and Prejudice
Designers must give priority to a bias and prejudice mitigation system in A.I. algorithms. To ensure this, audits and testing must be conducted regularly to identify and resolve prejudiced and biased behaviors and ensure an equitable A.I. system. A.I. systems must approach topics with empathy.
---
Responsibility and Accountability
International and national governing bodies must establish and enforce clear and concise regulations and mechanisms for oversight of technologies that use artificial intelligence. Such regulations must address data privacy, accountability for AI's decision-making results and processes, and, most importantly, AI's use in the fields of healthcare, finance, and education, amongst others.
The ethical implications of AI must be regularly monitored, and institutions that regularly utilize AI must set up committees specifically for AI evaluation. Such committees should include skilled designers and experts from across disciplines and ensure alignment with ethical guidelines.
The data provided to AI by users should remain under the users' control, including the right to privacy, the right to deletion, and the ability and basic education to understand the whole process of artificial intelligence content generation. This leads to:
---
Awareness and Education
Incorporating digital and media literacy into school curricula is a must: it builds critical thinking, responsible and ethical behavior on the internet, an understanding of the implications and overall processes of AI use, the ability to evaluate information sources and recognise misinformation, and an awareness of echo chambers and filter bubbles created by AI-driven algorithms. Students should be empowered to make informed decisions, must learn to foster community and social ties and face-to-face interactions, and should be nurtured with empathy.
It is equally necessary to teach the youth time management, to ensure controlled use not only of AI but of overall screen time. Mental health must be prioritized in education so that students can recognise and manage anxiety and stress levels and seek help if and when needed.
---
Community Building
Mindfulness techniques, meditation, and well-being programs should be implemented and made easily accessible in educational and workplace institutions to promote mental health. This initiative should involve a digital detox by promoting and encouraging 'off-grid' time in a productive way to reduce connectivity overload. Along with benefiting mental health, these initiatives should also foster community connections and social ties. They should address the social anxiety caused by screen-time isolation by identifying triggers and helping people attain coping mechanisms that are, and must remain, 'offline', such as art therapy, meditation, meet-and-greets, relaxation techniques, and other social guidance and skills.
---
V. NAVIGATING THE COMPLEX LANDSCAPE OF AI-DRIVEN PRESENT AND FUTURE
In the contemporary world, the influence of AI-driven globalization with the advancements in technology and the interconnectedness of the 'global village' has brought unprecedented opportunities and complex challenges. Throughout this discourse, it is understood that the addictive implications of the Casino Syndrome, along with its three tenets, are causing significant negative consequences. The paper has dissected the consequences and their nuances to potentially present the threats and remedies.
A dissection of the nuances of the Casino Syndrome and its impact can be understood on international, national, local, and individual levels. AI has cast nations into a rat race, especially the United States and China, which are competing for AI supremacy. This kind of competition often becomes hostile by going beyond its original technological trajectory. The world is witnessing technological warfare driven by the world's superpowers, whereas the developing nations, or so-called third-world nations, suffer under tight competition. The consequences of such warfare are far-reaching in terms of technology and economy, affecting millions of people apart from the active participants in the competition.
As companies amass fortunes of wealth, it is the working-class laborers who suffer. The fresh employment opportunities in AI primarily benefit those with a particular education and specialized skills, leaving behind those without such advantages. The scenario of AI professionals gaining lucrative job opportunities while others face job insecurity deepens income inequality, echoing the income disparities found within the Casino Syndrome. AI damages interpersonal relationships as well, and it encourages narcissistic tendencies by focusing too much on the individual. In the virtual world, people participate in curating content with precision, creating individual bubbles for every person, with damaging effects. Classical liberalism and neoliberalism, concepts that have foregrounded capitalism, are at the very center of the capitalistic approach to globalization and of globalization's approach to AI. Community building is ignored significantly, to the point that individuals either lose their cultural identity or have a fundamentalist reaction to it. The current world encourages individuals to compete against one another due to the intense professional race for employment.
Religion and culture have also been commercialized. As lived experiences become increasingly tech-mediated, individuals are unable to communicate properly because language is also affected. Eventually, familial bonds are harmed, the social divide widens, and women are further marginalized.
AI's impact on mental health is reflected in a steady rise in issues such as anxiety, depression, and stress among youth. Technology is causing loneliness and social anxiety, and students' critical thinking abilities are affected. Constant connectivity and information overload are overwhelming. Hyperreality is becoming the reality while the tangible reality is ignored, with long-term mental health consequences.
Addressing the mental health challenges emanating from AI-driven globalization necessitates a multifaceted approach that encompasses ethical AI development, accountability, education, and awareness. To mitigate the harmful effects, ethical AI development must be a priority. This entails designing AI systems with user and societal well-being at the forefront and finding the right balance between an individualistic approach and a community approach. Key factors include ethics, transparency, mitigation of bias, awareness and education, and community building.
Preparing individuals with the skills and knowledge to navigate the digital age is crucial. Integrating digital literacy, media literacy, and mental health education into educational curricula empowers people to critically evaluate data, manage stress, and make informed decisions about their internet existence. Increasing awareness about AI-driven globalization's challenges and the "Casino Syndrome" empowers individuals to take proactive steps to address these problems.
Acknowledging the detrimental effects of hyperreality on mental health, efforts should focus on enriching resilience. Mindfulness and well-being programs can aid individuals in coping with stress and stimulating mental health. Fostering digital detox and reducing screen time helps establish a healthier equilibrium between technology and real-life experiences. Strengthening community bonds and social ties counters the isolation exacerbated by excessive screen time and virtual environments.
Conclusively, AI-driven globalization introduces a unique set of challenges. By proactively enforcing ethical AI development, improving accountability, prioritizing education and awareness, and fostering resilience, one can navigate this complex topography. This approach enables one to harness the benefits of AI-driven globalization while reducing its detrimental results. As one strives to strike a balance between the digital and the real, one can mold a future where AI-driven globalization enriches our lives. | 81,256 | 1,668 |
399ff44aac135853b22fd8811bc64afb81e66428 | Prevalence of maternal antenatal anxiety and its association with demographic and socioeconomic factors: A multicentre study in Italy | 2,020 | [
"JournalArticle"
] | Background. Maternal antenatal anxiety is very common, and despite its short-and long-term effects on both mothers and fetus outcomes, it has received less attention than it deserves in scientific research and clinical practice. Therefore, we aimed to estimate the prevalence of state anxiety in the antenatal period, and to analyze its association with demographic and socioeconomic factors. Methods. A total of 1142 pregnant women from nine Italian healthcare centers were assessed through the state scale of the State-Trait Anxiety Inventory and a clinical interview. Demographic and socioeconomic factors were also measured. Results. The prevalence of anxiety was 24.3% among pregnant women. There was a significantly higher risk of anxiety in pregnant women with low level of education (p < 0.01), who are jobless (p < 0.01), and who have economic problems (p < 0.01). Furthermore, pregnant women experience higher level of anxiety when they have not planned the pregnancy (p < 0.01), have a history of abortion (p < 0.05), and have children living at the time of the current pregnancy (p < 0.05). Conclusion. There exists a significant association between maternal antenatal anxiety and economic conditions. Early evaluation of socioeconomic status of pregnant women and their families in order to identify disadvantaged situations might reduce the prevalence of antenatal anxiety and its direct and indirect costs. | Introduction
Maternal antenatal anxiety and related disorders are very common [1,2], and despite it being frequently comorbid with [3,4], and possibly more common than, depression [1,5], it has received less attention than it deserves in scientific research and clinical practice. Moreover, parental prenatal complications can interfere with the parent-child relationship, with the risk of significant consequences over the years for the child's development [6,7]. From a clinical point of view, this is a considerable omission given the growing evidence that antenatal maternal anxiety can cause adverse short-term and long-term effects on both mothers and fetal/infant outcomes [8][9][10][11][12][13][14][15][16], including an increased risk for suicide and for neonatal morbidity, which are associated with significant economic healthcare costs [17]. The prevalence of anxiety during pregnancy is high worldwide (up to approximately 37%); however, in low-and middle-income countries, it is higher than in high-income countries [1,2], with heterogeneity across nations with comparable economic status.
Several studies have investigated the relationship between demographic and socioeconomic risk factors with antenatal anxiety [2,18]. The results showed that several demographic (e.g., maternal age) and socioeconomic factors (e.g., employment, financial status) were associated with differences in the prevalence of anxiety symptoms or disorders, but the results are equivocal. However, both the prevalence and the distribution of these protective and risk factors may change over time, especially in a period of major socioeconomic change [19,20], such as the global economic crisis beginning in 2008, which led to the increased consumption of anxiolytic drugs and antidepressants with anxiolytic properties [21], to a decline in the number of births [22] and to impaired development in medical, scientific, and health innovations [23] that, in the next few years, could reduce the availability of help for families and health services [24]. However, despite the recently available and growing research evidence highlighting the need for early identification [25] and prompt treatment of maternal anxiety during both pregnancy and the postpartum period, anxiety remains largely undetected and untreated in perinatal women in Italy.
The aims of this study were (a) to assess the prevalence of state anxiety in the antenatal period (further stratified by trimesters) in a large sample of women attending healthcare centers in Italy and (b) to analyze its association with demographic and socioeconomic factors.
---
Methods
---
Outline of the study
The study was conducted as part of the "Screening e intervento precoce nelle sindromi d'ansia e di depressione perinatale. Prevenzione e promozione salute mentale della madre-bambino-padre" (Screening and early intervention for perinatal anxiety and depressive disorders: Prevention and promotion of mothers', children's, and fathers' mental health) project [26] coordinated by the University of Brescia's Observatory of Perinatal Clinical Psychology and the Italian National Institute of Health (Istituto Superiore di Sanità, ISS). The main objectives of this Italian multicenter project were to apply a perinatal depression and anxiety screening procedure that could be developed in different structures, as it requires the collaboration and connection between structurally and functionally existing resources, and to evaluate the effectiveness of the psychological intervention of Milgrom and colleagues [27][28][29] for both antenatal and postnatal depression and/or anxiety in Italian setting. The research project was assessed and approved by the ethics committee of the Healthcare Centre of Bologna (registration number 77808, dated 6/27/2017).
---
Study design and sample
We performed a prospective study involving nine healthcare centers (facilities associated with the Observatory of Perinatal Clinical Psychology, University of Brescia, Italy) located throughout Italy during the period, March 2017 to June 2018. The Observatory of Perinatal Clinical Psychology (https://www.unibs.it/node/12195) coordinated and managed the implementation of the study in each healthcare center. Only cross-sectional measures were included in the current analyses because screening for anxiety was carried out at baseline. The inclusion criteria were as follows: being ≥18 years old; being pregnant or having a biological baby aged ≤52 weeks; and being able to speak and read Italian. The exclusion criteria for baseline assessment were as follows: having psychotic symptoms, and/or having issues with drug or substance abuse.
---
Data collection
Each woman was interviewed in a private setting by a female licensed psychologist. All psychologists were trained in the postgraduate course of perinatal clinical psychology (University of Brescia, Italy) and were associated with the healthcare center. All the psychologists also completed a propaedeutic training course for the study, developed by the National Institutes of Health, on screening and assessment instruments and on psychological intervention [30]. The clinical interview was adopted to elicit information regarding maternal experience with symptoms of stress, anxiety, and depression. All women completed the interview and completed self-report questionnaires.
---
Instruments
---
Psychosocial and Clinical Assessment Form
The Psychosocial and Clinical Assessment Form [31,32] was used to obtain information on demographic and socioeconomic characteristics. In this study, the following demographic variables were considered: age, marital status, number of previous pregnancies, number of abortions, number of previous children (living), planning of the current pregnancy, and use of assisted reproductive technology. The socioeconomic variables were educational level, working status, and economic status.
---
State-Trait Anxiety Inventory
Given that the assessment of mental diseases, including antenatal diseases, is based primarily on self-perceived symptoms, evaluating these data using valid, reliable, and feasible self-rating scales can be useful. The state scale of the State-Trait Anxiety Inventory [33][34][35] was used to evaluate anxiety. It is a self-report questionnaire composed of 20 items that measure state anxiety, that is, anxiety in the current situation or time period. The possible responses to each item are on a 4-point Likert scale. The total score ranges from 20 to 80, with higher scores indicating more severe anxiety. This instrument is the most widely used tool in research on anxiety in women in the antenatal period [1,36]. The construct and content validity of the STAI for pregnant women has been proven [37,38].
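For illustration only, the scoring logic of the state scale can be sketched as a short function. The set of reverse-keyed (anxiety-absent) items and the cut-off used to classify a respondent as anxious are assumptions of this sketch and are not specified in the text above.

```python
# Illustrative sketch of STAI state-scale scoring; not the study's own code.
# ASSUMPTION: the reverse-keyed item numbers and the cut-off below are placeholders.
REVERSE_KEYED = {1, 2, 5, 8, 10, 11, 15, 16, 19, 20}  # hypothetical item set

def stai_state_score(responses):
    """responses: dict mapping item number (1-20) to a Likert rating of 1-4."""
    if set(responses) != set(range(1, 21)):
        raise ValueError("all 20 items must be answered")
    total = 0
    for item, rating in responses.items():
        if not 1 <= rating <= 4:
            raise ValueError(f"item {item}: rating must be between 1 and 4")
        # Reverse-keyed items describe the absence of anxiety and are scored 5 - rating.
        total += (5 - rating) if item in REVERSE_KEYED else rating
    return total  # ranges from 20 to 80; higher scores indicate more severe anxiety

def classify_anxious(score, cutoff=40):
    # A cut-off of 40 is a common convention, used here purely as an example.
    return score >= cutoff
```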
---
Procedures
Women who met the inclusion criteria were approached by one of the professionals affiliated with the healthcare center and involved in the research when they attended a routine antenatal appointment. They received information about the content and implications of the study. Future mothers who signed the informed consent document completed the questionnaires and then underwent an interview with a clinical psychologist.
---
Statistical analysis
All variables were categorized. A statistical analysis that included descriptive and multiple logistic regression models was performed. For descriptive analyses, frequencies and percentages were calculated for categorical variables, and the Chi-square test was utilized for comparisons. The logistic regression model was used to evaluate the associations between the demographic and socioeconomic variables and the risk of antenatal anxiety. In the analytic models, each demographic and socioeconomic variable was included both individually and together. All analyses were performed using the Statistical Package for Social Science (SPSS) version 25.
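The study ran these models in SPSS; purely as an illustration of the adjusted logistic regression step, an equivalent specification in Python might look as follows. The file name and column names are hypothetical placeholders, not variables from the study's dataset.

```python
# Illustrative re-specification of the adjusted logistic regression (the study used SPSS).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("antenatal_survey.csv")  # hypothetical file with one row per woman

# 'anxious' is 1 if the woman scored above the STAI-S threshold, 0 otherwise;
# all predictors are categorical, mirroring the categorized variables described above.
model = smf.logit(
    "anxious ~ C(education) + C(employment) + C(economic_status)"
    " + C(planned_pregnancy) + C(previous_abortion) + C(living_children)",
    data=df,
).fit()

# Exp(B), i.e. adjusted odds ratios, with 95% confidence intervals.
odds_ratios = np.exp(model.params)
or_ci = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("Exp(B)"), or_ci], axis=1))
```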
---
Results
---
Subjects
To estimate the minimum sample size, we relied on three studies [39][40][41], indicating that it was necessary to enroll 296 patients. However, our main aim was to recruit as large a sample as possible to promote perinatal mental health; thus, at the end of the 1-year recruitment period, we enrolled more mothers. Among the 2096 women invited to join the study, 619 (29.5%) refused, mainly due to lack of time, personal disinterest in the topic, and the conviction that they are not and never will become anxious or depressed. Therefore, the total study sample consisted of 1,477 women. Of these, 28 women did not complete the anxiety questionnaire. Thus, the sample includes 1,142 pregnant women and 307 new mothers. Given the aims of this study, only pregnant women were included in the current statistical analysis. Table 1 presents the list of the healthcare centers in which the pregnant women were recruited. Table 2 presents demographic and socioeconomic characteristics, along with an estimation of the relative risk of anxiety through both bivariate and multivariate analyses.
---
Prevalence of antenatal state anxiety
The prevalence of anxiety (Table 3) was 24.3% among pregnant women. A further division into 13-week trimesters was applied, showing that the prevalence of antenatal anxiety was high (36.5%) in the second trimester and then decreased in the third and last trimester of pregnancy.
Bivariate analyses (Table 2) showed a significantly higher risk of anxiety in pregnant women who have a low level of education (primary or semiliterate) (p< 0.01), who are jobless (i.e., student, homemaker, or unemployed) (p< 0.01), and who have economic problems (p< 0.01). Furthermore, during the antenatal period, women experienced a higher level of anxiety when they had not planned the pregnancy (p< 0.01), did not resort to assisted reproductive technology (p< 0.05), had a history of abortion (p< 0.05), and had children living at the time of the current pregnancy (p< 0.05).
The adjusted logistic regression analysis (see Table 2) showed that pregnant women with a high (university or secondary) educational level (Exp B = 0.60), temporary or permanent employment (Exp B = 0.64), and, in particular, either a high economic status or few economic problems (Exp B = 0.58) showed a reduction in the risk of antenatal anxiety by almost half. Furthermore, a similar reduction in risk was observed in women who had planned for their pregnancy (Exp B = 0.57).
---
Discussion
This study is one of the largest to evaluate the prevalence of anxiety during pregnancy in a sample of women attending healthcare centers in Italy. In general, the fact that the demographic data of participants in this study are comparable to those from populationbased epidemiological studies [42] indicates that our results are representative of the overall population of pregnant women in Italy. Our findings are in line with the prevalence in a previous Italian study [43] and the overall pooled prevalence for self-reported anxiety symptoms of 22.9% reported in a recent systematic review and meta-analysis [1]. Similarities in the prevalence of maternal antenatal anxiety remain regardless of which diagnostic tool was used. Regarding the use of the STAI in this study, it should be noted that it is the most widely used self-reporting measure of anxiety. Furthermore, its criterion, discriminant and predictive validity [44], and ease of use can provide a reasonably accurate estimate of prevalence, and its widespread use in research studies [1,16] can enable more accurate comparisons among nations.
With regard to the trimestral prevalence of antenatal anxiety, our study found that the prevalence of anxiety was highest during the second trimester. This observation is inconsistent with the results from a recent meta-analysis [1] that found that the prevalence rate for anxiety symptoms increased progressively from the first to the third trimester as the pregnancy progressed. However, it should be noted that the results regarding the monthly/trimestral/ semestral prevalence of perinatal anxiety were not univocal in all studies [1,2].
Our study shows that having a low level of education, being jobless, and having financial difficulties are three crucial predisposing factors of anxiety in pregnant women. These associations are clearly consistent with previous studies that found that antenatal anxiety was more prevalent in women with low education and/or low socioeconomic status (e.g., unemployment, financial adversity) [45][46][47][48][49] and might be related to the global economic crisis that currently affects, especially, southern nations [50]. Studies conducted in developing countries, where low education and low socioeconomic status are both present, highlight the association with prenatal anxiety [51][52][53].
Furthermore, consistent with previous studies, our results show that antenatal anxiety is more prevalent in women who have unplanned pregnancies [43,54] and who have living children at the time of the current pregnancy [55]. We assume that the reasons for these associations most likely concern the costs associated with raising one or more children, especially when the (new) child is unplanned. This interpretation finds support in the results from previous studies, showing that low income, unemployment, and financial adversity [2] are related to higher levels of antenatal anxiety symptoms. Moreover, it would also explain why resorting to assisted reproductive techniques, which in Italy requires financial resources, was not a risk factor.
Our findings regarding the association between ongoing economic hardships or difficulties and antenatal anxiety can be particularly important in light of the short-and long-term adverse impacts of the coronavirus disease 2019 (COVID-19) pandemic and restrictive measures adopted to counteract its spread [56,57]. Indeed, the COVID-19 outbreak has significantly impacted European and global economies both in the short term and in the coming years [58,59]. Furthermore, as shown by general population surveys, social isolation related to the COVID-19 pandemic is associated with a wide range of adverse psychological effects, including clinical anxiety and depression and concern about financial difficulties [60,61], which can persist for months or years afterward, as indicated by the literature on quarantine [62]. A vulnerable population, such as women in the perinatal period, may be among the individuals who are most affected.
---
Clinical Impact
Our findings suggest that screening for early detection of antenatal anxiety (as well as depression, which is frequently comorbid with anxiety [3,4]) is recommended for all pregnant women, but especially for those who have a poor level of education and financial difficulties. Early detection and diagnosis will enable psychological and, where appropriate, pharmacological treatment in the health services to prevent anxiety complications in both these women and their children.
---
Limitations
Three main limitations of this study should be noted. First, a cross-sectional approach to antenatal anxiety does not allow us to fully explore whether and what factors may predict persistent anxiety symptoms beginning during pregnancy and progressing to postpartum. Second, the size of the sample during the first trimester of pregnancy was too small to draw any conclusions. Finally, the rates of diagnosis of any anxiety disorder in our sample were not assessed.
---
Conclusions
There is a significant association between maternal antenatal anxiety and economic conditions. The aftermath of the great recession of 2008-2009 and the ongoing economic impact of the COVID-19 pose a serious problem for women and their families. With the present historical and economic background in mind, our findings would allow us to hypothesize that early evaluation of the socioeconomic status of pregnant women and their families to identify disadvantaged situations might reduce the prevalence of antenatal anxiety and its direct and indirect costs. In this sense, our findings may give Italian health policy planners useful information to develop new cost-effective antenatal prevention programs focused on socioeconomically disadvantaged families. Furthermore, we believe that our results will serve as a baseline for future comparisons between nations inside and outside the European Union, as well as for new studies on the protective and risk factors related to perinatal anxiety in those nations.
---
Data availability statement. The complete dataset is available from the corresponding author upon request.
---
Conflict of interest. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. | 16,748 | 1,421 |
9dfbdd940733bbb172e88efb82a72a5e329d8c7f | Modelling transitions between egalitarian, dynamic leader and absolutist power structures | 2,022 | [
"JournalArticle"
] | Human groups show a variety of leadership dynamics ranging from egalitarian groups with no leader, to groups with changing leaders, to absolutist groups with a single long-term leader. Here, we model transitions between these different phases of leadership dynamics, investigating the role of inequalities in relationships between individuals. Our results demonstrate a novel riches-to-rags class of leadership dynamics where a leader can be replaced by a new individual. We note that the transition between the three different phases of leadership dynamics resembles transitions in leadership dynamics during the Neolithic period of human history. We argue how technological developments, such as food storage and/or weapons which allow one individual to control large quantities of resources, would mean that relationships became more unequal. In general terms, we provide a model of how individual relationships can affect leadership dynamics and structures. | Introduction
There are many different types of leadership dynamics found in human societies. For instance, many small groups, such as private companies, have a permanent leader. In other groups the leader changes over time, such as in university departments or social societies. These patterns are also seen at larger scales. During the decades before and after Julius Caesar crossed the Rubicon in 49BCE, the Roman Republic transitioned from a system of Consuls who held power for only one year, to the Roman Empire where there was a single Imperator Caesar who ruled for life and passed the title to a chosen successor.
A notable feature of many human groups is that they often do not explicitly coerce their members to join a hierarchy. Instead, soft power and prestige play a strong role [1,2], with status being an abstraction of more tangible material resources such as land, food, weapons or other commodities [3]. Status is then voluntarily conferred upon leaders by their allies [2][3][4][5][6][7][8][9], with many of these relationships being asymmetric [2,5,[9][10][11]. The member with the highest status is usually deemed the leader [2,4,6,8], creating hierarchical societies. Questions remain as to which factors determine why some groups have no leader, others have transient leaders and yet others have relatively permanent leaders.
Considering society at a large scale, we observe a shift between different forms of leadership dynamics in evidence from the Neolithic Era. Before this era, human societies consisted of egalitarian hunter-gatherer groups where material resources such as food were shared relatively equally [12] and leadership roles were facultative and of a temporary duration. There followed a transition to sedentary groups where high-status individuals had more resources but leaders still changed relatively regularly [13]. Finally, hereditary leadership became institutionalised, where the role of a chief was passed down a paternal line, which monopolised most of the resources [14].
Previous work has argued that these shifts were due to social and technological developments, which meant that interactions between individuals became increasingly asymmetric. These asymmetries were likely due to control of agricultural surpluses [15][16][17][18], land [15], ideologies [19], or military units and weapons [14]. In light of this evidence, the model we present here investigates how asymmetry in status interaction can generate the different classes of leadership dynamics observed during the Neolithic Era.
Network analysis has proved to be a useful approach for studying the interactions of members of a population [3,7,20]. Quantitative study of hierarchical networks is usually static in nature and networks are presented as snapshots in time [21,22]. However, when using the nodes of a network to represent individuals, the properties of the nodes are often in flux, and the connections between nodes change over time. When there is a feedback effect between node properties and node-to-node connections, the network is said to be coevolutionary. These coevolutionary networks can generate complex dynamics [23,24].
In order to investigate the factors underlying different types of leadership dynamics, we present a dynamic coevolutionary network which incorporates the status of individuals as properties of the nodes. We take status to represent the control of tangible and intangible resources such as food, land, money or other assets, or authority. An edge on the network represents a relationship between two individuals, over which exchanges of status are made. An exchange of status might be the trade of goods or services, an employment contract, or a political endorsement. An important factor of our model is the concept that many of the trades in a relationship are somewhat unequal, both in the absolute value assigned to each partner, and the relative value to each partner. Based on this we also specify rules for how edges between nodes are rewired so that individuals can maximise the status they receive. Given these rules, we allow the network to evolve over time and observe its dynamics.
---
Model
The model consists of a dynamic network of n nodes which represent individual people. All individuals are considered to be identical and are unable to coerce one another to form relationships. Leadership among individuals is solely determined by status. Each individual in the model has a status level which depends on their relationships with others, meaning that status is adjusted according to the status of those who they are linked to. Individuals distribute a proportion of their status amongst those they are linked with, and may not expect the same quantity of status in return. Individuals can change who they associate with according to the marginal utility of the relationships. Each individual's node i maintains a status s i , which translates to how much influence they have within the group. Status acts as a multivariate aggregate of an individual's level of money, prestige (titles, jobs, etc), and ownership (land, valuable resources, etc). For simplicity, we assume that individuals must maintain a fixed number of necessary relationships, which is constant for all individuals. These relationships might be needed to participate in society, providing land to live on or food to survive the winter. In our model, each node is assigned λ unidirectional outgoing edges which represent their relationships and are linked to other nodes. Nodes can have any number of incoming edges from others in the network.
The statuses of the nodes are updated according to their edges in the status update stage, and nodes may rewire an edge in the rewiring stage. The model is run forward in time to observe the distribution of status and changes of that distribution amongst the nodes. We concentrate on looking for leader individuals with nodes of high status and check to see whether they were superseded by other leaders. Models are run until patterns of leadership dynamics stabilised, or for a substantially long time (up to 5 million time steps) to confirm that there is an extremely low likelihood of a new leader rising to high status.
---
Status update stage
In the model, a proportion of r of each node's status is distributed amongst each of its edges (including both incoming and outgoing edges). This formalisation of sharing status amongst edges is based on Katz's prestige measure [25]. For each edge, we assign a temporary status value. This is calculated by adding the status contributions from both of the nodes that are linked. To model unequal relationships we introduce the inequality parameter (q) which unequally reassigns the edge's status back to those joined by the edge. In this formulation, the total amount of status in the model is constant. The steps are done in the following order at time t:
1. Each unidirectional edge (i → j) is assigned a temporary status value: e_{i→j}(t) = r s_i(t)/k_i + r s_j(t)/k_j, where k_i is the degree of node i including both incoming and outgoing edges. For an example, see Fig 1.
2. Each node deducts the status distributed to its edges: s_i(t + 1) = (1 - r) s_i(t).
3. The status of each edge is redistributed back to the nodes: for every edge e_{i→j}, s_j(t + 1) = s_j(t + 1) + q e_{i→j} and s_i(t + 1) = s_i(t + 1) + (1 - q) e_{i→j}. (A minimal code sketch of this update stage is given below.)
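A minimal sketch of this update stage in Python is given below; it is illustrative only and is not the authors' released code (which is linked at the end of the article). Nodes are indexed integers, `statuses` holds s_i, and `edges` is a list of directed pairs (i, j).

```python
# Minimal sketch of the status update stage (illustrative, not the authors' code).
from collections import Counter

def status_update(statuses, edges, r, q):
    """statuses: list of s_i; edges: list of directed pairs (i, j); returns updated
    statuses and the temporary value assigned to each edge at this time step."""
    degree = Counter()
    for i, j in edges:                 # k_i counts both incoming and outgoing edges
        degree[i] += 1
        degree[j] += 1
    # Step 1: temporary status value of each edge.
    edge_value = {(i, j): r * statuses[i] / degree[i] + r * statuses[j] / degree[j]
                  for i, j in edges}
    # Step 2: each node deducts the status distributed to its edges.
    new_statuses = [(1 - r) * s for s in statuses]
    # Step 3: each edge's value is redistributed unequally, share q to the receiver j
    # and share (1 - q) back to the originator i, so total status is conserved.
    for (i, j), value in edge_value.items():
        new_statuses[j] += q * value
        new_statuses[i] += (1 - q) * value
    return new_statuses, edge_value
```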
---
Rewiring phase
In order to maximise status, each individual determines which of its outgoing relationships is of the least value and, with probability w, chooses a new relationship according to the following rules (a code sketch follows the list).
1. For node i at time t we identify the edge of minimum value (i → j*) from the node's outgoing edges (i → j), such that e_{i→j*}(t) = min_j e_{i→j}(t).
Fig 1. Example of how the status value of edges is calculated. In this example, s_1 will receive 0.03(1 - q) status from the edge and s_2 will receive 0.03q status.
https://doi.org/10.1371/journal.pone.0263665.g001
2. With probability w, we rewire the edge to a new node by choosing a random node z such that there is no edge (i → z). Delete edge (i → j*) and add edge (i → z).
3. With probability (1 -w), we do nothing.
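Continuing the sketch above, the rewiring rule and a bare simulation loop might be written as follows; parameter values and tie-breaking are assumptions for illustration, not the authors' implementation.

```python
# Illustrative rewiring step and driver loop, reusing status_update() from the sketch above.
import random

def rewire(node, n, edges, edge_value, w, rng):
    """With probability w, node drops its least valuable outgoing edge and links to a
    random node it is not already linked to (and which is not itself)."""
    if rng.random() >= w:
        return
    outgoing = [(i, j) for (i, j) in edges if i == node]
    worst = min(outgoing, key=lambda e: edge_value[e])       # edge (i -> j*)
    already_linked = {j for _, j in outgoing} | {node}
    candidates = [z for z in range(n) if z not in already_linked]
    if candidates:
        edges.remove(worst)
        edges.append((node, rng.choice(candidates)))

def simulate(n=50, lam=3, r=0.2, q=0.6, w=0.5, steps=10_000, seed=0):
    rng = random.Random(seed)
    statuses = [1.0] * n                                      # equal starting status
    edges = [(i, j) for i in range(n)
             for j in rng.sample([k for k in range(n) if k != i], lam)]
    history = []
    for _ in range(steps):
        statuses, edge_value = status_update(statuses, edges, r, q)
        for node in range(n):
            rewire(node, n, edges, edge_value, w, rng)
        history.append(list(statuses))                        # record statuses per step
    return history, edges
```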
---
Results
We will present our analysis of the dynamics that result from the interplay between the processes we have defined. Simulations of the model were run with parameter values chosen to explore their effects on the dynamics over the extremes of their ranges. Depending on the parameters, we either observe relatively equal statuses among the population, or a relatively high status level for one or a few individuals' nodes. An example of a typical network with a single dominant individual can be seen in Fig 2.
---
Inequality in relationships affects leadership dynamics
A key parameter in the model is the inequality parameter (q), which models an unequal transfer of status from a relationship originator to the receiver. As we increase q, we observe different phases of dynamics in the model, which are shown in Fig 3. We dub the individual whose node has the highest level of status as the leader. The model exhibits three different types of leadership dynamics: no leader, transient leader(s), and permanent leader(s).
---
Exploration of a broader range of parameters
We find our simulations demonstrate all three phases of leadership dynamics over a wide range of parameters, including the population size and numbers of edges. To show this, we run simulation models for each parameter set and record the number of times over the simulation there is a change of individual with the highest status. We find a similar pattern across the parameters tested (see S1-S9 Figs) to that shown in Fig 3. At lower values of q there is a very fast turnover of the highest-status individual. As q is increased, we find a transient phase where new leaders emerge, but there is still turnover of leaders. At higher values of q there are very few new leaders. When leaders are stable, we observe that the number of stable high-status leaders with high levels of q is equal to λ - 1. We also found that the number of relationships per individual has an impact on the transitions between phases. As that parameter increases, we observe how transitions between the three different types of leadership dynamics start to occur at lower values of q (see S1-S9 Figs).
Fig 3. First there is effectively no leader at all (panel A). Then we see that a single individual can rise to a high leadership status, but this is transient and leaders are replaced by other individuals (panel B). The length of time that individuals stay as leader then increases as we increase q until the leader is effectively permanently in charge (panel C). In the next phase, a second individual can rise to a high status alongside the first leader, but these individuals' leadership position is transient (panel D). Finally, two individuals share leadership status and remain so permanently (panel E). The value of q is shown; other parameters are r = 0.2, n = 50, λ = 3, w = 0.5.
https://doi.org/10.1371/journal.pone.0263665.g003
---
Transient leader phase demonstrates a power vacuum
An interesting phase in the dynamics demonstrates transient leaders (Fig 3, panel B). In this case, at any particular time-point in the simulation, there is only one high-status leader. This leader can lose status in a riches-to-rags event, but another quickly replaces it. We have produced a video animation of this phase of the model, which is available in S1 Video. We ran simulations over a range of values of the inequality parameter (q); Fig 4 shows that there are ranges of q where leader turnover is relatively high, but the number of leaders at any particular time-point is relatively constant, thus demonstrating a power-vacuum effect in our model.
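As an illustration of how the quantities plotted in Fig 4 could be computed from such a simulation, the sketch below counts turnovers of the highest-status node and the number of nodes above a status threshold at each time step. The threshold of 3.0 follows the cusp point discussed in the next subsection, and the status history is assumed to have been recorded during the run (as in the driver loop sketched earlier).

```python
# Illustrative leader-turnover measures computed from a recorded status history.
def leader_metrics(status_history, threshold=3.0):
    """status_history: list (over time steps) of status lists, e.g. from simulate()."""
    turnovers = 0
    previous_top = None
    leaders_per_step = []
    for statuses in status_history:
        top = max(range(len(statuses)), key=lambda i: statuses[i])
        if previous_top is not None and top != previous_top:
            turnovers += 1                       # change of highest-status individual
        previous_top = top
        leaders_per_step.append(sum(s > threshold for s in statuses))
    return turnovers, leaders_per_step
```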
---
Distribution of status and node degree
We find heavy-tailed distributions of node status and node degrees in our network model (Fig 5). We looked more closely at the distribution of node degrees for q = 0.525 using the Python Powerlaw package [26,27]. As the parameters of our model stipulate that all nodes have at least degree 3, it makes sense to set a lower bound for the distribution we investigate, which we set to X_min = 6. A likelihood-ratio test [26] is used to compare the goodness of fit between the power-law distribution and two other distributions. We found no evidence (p ≈ 10^-100) for either a log-normal or an exponential distribution compared with the power-law distribution (exponent of P(x) ∝ x^-8). The relatively small range for the distribution is unusual for a power law and this is only found at a localised parameter value, but it does indicate that the distributions we find are unlikely to be explained by a simple log-normal or exponential distribution.
Considering the distribution of node statuses, there is a cusp point of s ≈ 3.0 (Fig 5, panels C and D), where the frequency of nodes with s > 3.0 stops decreasing or starts to level off, which justifies our choice for using this as a threshold for defining leaders in Fig 4.
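The tail comparison described above can be reproduced in outline with the same Python package; the degree sample passed to the function below is the only assumed input and would come from a simulation run (e.g. at q = 0.525).

```python
# Outline of the degree-distribution comparison using the Python `powerlaw` package [26,27].
import powerlaw

def compare_degree_distribution(degrees, xmin=6):
    """degrees: iterable of node degrees from a simulation run (assumed input)."""
    fit = powerlaw.Fit(degrees, xmin=xmin, discrete=True)   # lower bound X_min
    print("estimated power-law exponent:", fit.power_law.alpha)
    # R > 0 favours the power law over the alternative; p gives the significance.
    for alternative in ("lognormal", "exponential"):
        R, p = fit.distribution_compare("power_law", alternative)
        print(f"power_law vs {alternative}: R = {R:.2f}, p = {p:.3g}")
```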
---
Shifts in leadership dynamics are consistent with the Neolithic
In the introduction we argued that shifts in human leadership dynamics were due to technological advances that allowed individuals to control greater pools of resources. These advances would have had the effect of increasing the inequality parameter, as seen in Fig 3. The analysis in that figure was done with a relatively high rewiring rate, at the same frequency as the status update. In human relationships, the rate at which relationships are changed is often relatively low compared to how often status changes. For instance, new contracts take months to draw up but money and goods may change hands quite frequently. Adjusting the parameters, we can generate leadership dynamics over a broad range of timescales. We have selected one which is consistent with the time lengths observed in the Neolithic era (see Fig 6).
---
Discussion
The model presented here demonstrates three different phases of leadership dynamics: a phase with no leader, a phase with changing leaders, and a phase with a constant leader or leaders. Which phase is present in the network depends on the inequality of relationships between individuals. This demonstrates how different leadership dynamics seen in human societies can be due to how status is transferred between the individual members of the society. This suggests that self-organisation of social norms around inequality can play a role in keeping a system parameter near to a critical point where leadership changes relatively frequently. This work demonstrates a dynamic hierarchy in human networks where all individuals have equivalent traits and fitness. Our model is a form of preferential attachment, where nodes are more likely to connect to other nodes which are already of high status [22]. However, this is usually applied to growing networks [28,29], and once one individual gains leadership, it is unlikely to change. In an alternative model, new individuals may dominate if they have a fitness advantage [30,31]. Our model presents a riches-to-rags alternative where a high-status individual can lose status. In our model, we see how nodes have high numbers of connections (relationships) at some points and then other nodes take over. We find that the predicted exponent of our power law distribution is higher than that found in some friendship networks [22]. However, friendship networks are only one type of relationship, and humans can relate to each other in many different ways, an example being where a chieftain controls access to food. Further study is needed to investigate how a model like ours can be challenged against empirical data.
Fig 5. As q is increased the distributions become increasingly skewed. At higher values of q, the number of nodes becomes a factor, with a second hump visible on the right-hand side of both distributions. We can see how the rewiring of edges to an extra leader between q = 0.54 and q = 0.55 (see Fig 4, panel A) suppresses the frequencies of nodes with middling status or node-degree as q is increased. Parameters are the same as in Fig 3, q as shown. Simulations were run for 2 million time steps.
https://doi.org/10.1371/journal.pone.0263665.g005
Evidence for human societies with dynamic leaders during the Neolithic transitions [13] is consistent with the dynamic leader phase of our model. There is a transition between three phases of leadership dynamics in human societies from relatively egalitarian power structures, through a period where leaders change over time, to dominant institutionalised leaders [13]. Our model can be interpreted as a conceptual model for these leadership dynamics. Many have argued that control of surplus physical resources such as food and land, or intangible resources such as religious authority, can play an important role in promoting individuals to leadership rank [15,19,32,33]. Having a surplus means an individual is able to form relationships where they need only exchange a small proportion of their resources, while their partners must exchange a larger proportion. Such inequality can be further exacerbated by scarcity of resources created by high population density [34]. This form of inequality is modelled by the level of the inequality parameter (q) in our model. Interestingly, our results present an alternative to this picture, suggesting that increases in the numbers of relationships per individual might also play an important role in creating conditions for absolutist power structures. More than one factor may have played a role in the transitions in leadership structure that happened during the Neolithic.
The three phases of human leadership dynamics correspond to three phases identified in the organisational psychology literature. Lewin has identified three modes of leadership: Laissez Faire, Democratic and Autocratic [35]. These three modes largely correspond to the three phases of leadership dynamics found in our model. Lewin's study linked increasing control of central resources to more Democratic and Autocratic modes. A controlled surplus of this central resource enables a leader to pay off many individuals and maintain their leadership [36]. This reflects an inequality of alliances which is key to our model.
An interesting feature of our model is that it demonstrates heavy-tailed distributions of status and node-degree. Many systems are known to demonstrate such heavy-tailed distributions when they are at a critical point [37], i.e., when the rate-of-change of a variable is close to zero. Further analysis of our model in the Supporting Information, which assumes that edge-rewiring is relatively slow compared to status update, shows an expected rate-of-change of node degree close to 0.0 when q ≈ 0.5. This suggests our model reaches a critical point, but further work is needed to investigate this in more detail.
The work we have presented has some limitations. The model we have presented is complex and difficult to analyse. Future models will hopefully simplify our approach while maintaining the interesting dynamics of changing leaders we have found in the model. Other models could add more realism, incorporating mortality of individuals and inheritance, or varying the numbers and types of relationship between individuals. Finally, it is important to find methods for challenging leadership models against data.
In this paper we focused primarily on applying this model to the development of insights regarding the Neolithic transitions from flat power structures to hierarchical societies. Future work can build upon these foundations to examine whether this model can be applied to other changes in societal structure, such as the movements from monarchy toward parliamentary democracies in 18th-century Europe, or a detailed study of the transitions of Roman civilization between various different structures: monarchy, the annual election of two concurrent consuls in the Roman Republic, a phase with three 'Triumvirate' leaders, and a single Imperator Caesar in the Roman Empire. As well as human societies, this theory can be of value to studying hierarchies in animal societies [38]. Other work might investigate the impact of relaxing some of our assumptions, for instance exploring different rewiring rules where nodes have different numbers of edges, or rewire to others based on similar or higher levels of status or numbers of edges. The model can also be extended in various ways to better represent the real-world contexts in which leadership dynamics operate; these could include representations of technological innovations, changes in social norms, or power struggles between potential leaders. These extensions would enable us to develop the model further into a powerful exploratory tool for human leadership dynamics.
[Supporting figure caption (PNG): As we increase q, leaders have an increased time of leadership; at around 10^4 the average leader has quite a long period with the highest status but there is still a large turnover. On the right side there are very few leaders in the chart and we see a single leader or several leaders. Parameters: w = 0.1, n = 1000, and as shown in the figure.]
[Supporting figure caption (PNG): As above, with parameters w = 1.0, n = 1000, and as shown in the figure.]
---
There are no empirical data associated with this manuscript. The underlying code used to generate the results can be found at https://github.com/johnbryden/PrestigeModel.
---
Author Contributions
Conceptualization: John Bryden. | 21,827 | 961 |
adc71ac811d356042869ac463959708d06a46d8d | Fatal child maltreatment associated with multiple births in Japan: nationwide data between July 2003 and March 2011 | 2,013 | [
"JournalArticle"
] | Objectives The purpose of the present study is to clarify the impact of multiple births in fatal child maltreatment (child death due to maltreatment). Methods The national annual reports on fatal child maltreatment, which contain all cases from July 2003 to March 2011, published by the Ministry of Health, Labor and Welfare of Japan, were used as the initial sources of information. Parent-child murder-suicide cases were excluded from the analyses. Multiple births, teenage pregnancy and low-birthweight were regarded as the exposed groups. The relative risks (RRs) and their 95 % confidence intervals (CIs) were estimated using the data from the above reports and vital statistics. These analyses were performed both including and excluding missing values. Results Among 437 fatal child maltreatment cases, 14 multiple births from 13 families were identified. The RRs of multiple births per individual were 1.8 (95 % CI 1.0-3.0) when including missing values and 2.7 (95 % CI 1.5-4.8) when excluding missing values. The RRs of multiple births per family were 3.6 (95 % CI 2.1-6.2) when including missing values and 4.9 (95 % CI 2.7-9.0) when excluding missing values. The RR tended to be much lower than the RR of teenage pregnancy (RR 12.9 or 22.2), but slightly higher than the RR of low-birthweight (RR 1.4 or 2.9). Conclusions Families with multiple births had elevated risk for fatal child maltreatment both per individual and per family unit. Health providers should be aware that multiple pregnancies/births may place significant stress on families and should provide appropriate support and intervention. | Introduction
Multiple births are thought to be a risk factor for child maltreatment [1][2][3][4]. These earlier studies, however, were performed two to three decades ago, and were not necessarily population-based. It is of little doubt that the current conditions surrounding families, for example, family planning, child rearing practices and maternal/paternal age, are quite different from those prevalent at that time. Recently, family size has rapidly become smaller, maternal and paternal ages at first childbirth are becoming higher, and assisted reproductive technology has spread widely in Japan [5].
Nevertheless, very few population-based data on the relationship between child maltreatment and multiple births are available. In an intensive literature search, the present author could find no report on this topic other than the earlier studies mentioned above. One possible reason is that prospective epidemiologic research on child maltreatment is very difficult due to the underreporting of abuse cases. It has long been believed in Japan that the frequency of child maltreatment in cases of multiple births is around 10-fold higher than among singletons according to the only hospital-based report done in Japan, authored by Tanimura et al. [4]. The purpose of the present study is to clarify the impact of multiple births on fatal child maltreatment using nationwide data.
---
Materials and methods
---
Subjects
National annual reports on fatal child maltreatment (the first to eighth reports) published by the Ministry of Health, Labor and Welfare of Japan (in Japanese) were used as the initial sources of information for the present secondary data analyses. All cases of fatal maltreatment of children from 0 to 17 years of age between July 2003 and March 2011 were reported.
Fatal child maltreatment was defined as child death due to maltreatment. The definitions of maltreatment and parental guardian were based on the Child Abuse Prevention Law of Japan executed in 2000. The types of maltreatment included physical abuse, psychological abuse, neglect and sexual abuse. The annual report tallied the cases of fatal child maltreatment according to whether the deaths were based on parent-child murder-suicide or not. Cases of parent-child murder-suicide were excluded from the present analysis, since the background and potential risk factors may be quite different from those in cases of fatal child maltreatment without suicide.
The numbers of women exhibiting any of about 20 physical and mental issues during pregnancy and the perinatal period were surveyed via questionnaire administered to the local public authorities, and the results were presented in the annual reports. The reported number of women with each issue does not necessarily show that that particular issue is a real risk factor for fatal child maltreatment, since the frequency of each issue in the unexposed or general population was not taken into consideration in the report. These data on physical and mental issues were not presented according to the ages of the victims. One limitation of this retrospective questionnaire survey is that there were many missing values among these data.
---
Statistical analyses
Multiple births, low-birthweight (<2,500 g) and teenage pregnancy were the only variables for potential risk factors, the numbers of which in the general population at birth could be estimated using vital statistics. The author substituted childbirth below the maternal age of 20 in the vital statistics for teenage pregnancy.
The relative risks (RRs) and their 95 % confidence intervals (CIs) in cases of fatal child maltreatment related to multiple births were estimated using fatal maltreatment data and vital statistics. The RRs of teenage pregnancy and low-birthweight were also calculated to clarify the relative impact of multiple births on fatal child maltreatment. The data on multiple births and low-birthweight were presented in all eight reports, and teenage pregnancy was tracked beginning with the third annual report. The information on the missing values of teenage pregnancy, low-birthweight and multiple births were presented after the second report.
The RR was calculated as the ratio of the incidence in the exposed population to that in the unexposed population according to the definition. Multiple births, teenage pregnancy and low-birthweight were regarded as risk factors against singleton births, non-teenage pregnancy and nonlow-birthweight, respectively. The analyses were performed using the concept of the birth-year cohort. For example, the incidence in multiple births was calculated as the number of multiple births cases with fatal child maltreatment divided by the person-years of the birth-year cohort of the general multiple births population in the reported period (between July 2003 and March 2011). The incidence in singletons was calculated in the same manner. There were no data on the number of multiple births, birthweight or maternal age for children from one to 17 years of age in the vital statistics. It was assumed that the percentage of the exposed population in the total general population at birth was constant for children from one to 17 years of age. For example, the percentage of multiple births in 2003 was used as the percentage of multiples of 1 year of age in 2004, 2 years of age in 2005, and so on. For the general population data, vital statistics from 1986 to 2011 were used, considering the year of the annual report and the age of the victims. Theoretically, the victims of 17 years of age in the first report (published in 2005) were born in 1986, and the victims of 0 years of age in the eighth report (published in 2012) were born in 2011. The follow-up period of the birth-year cohort was adjusted for the years 2003 and 2011 according to the research period (6 months and 3 months, respectively). The follow-up period of multiple births was distributed from 0.125 years (2011 cohort) to 6.25 years (1994-2004 cohorts) according to the birth year. Then the RR was calculated as the ratio of the incidence in the multiple births population to that in the singleton population. The RRs of low-birthweight and teenage pregnancy were calculated in the same manner.
Regarding multiple births, the number of families with at least two liveborn multiples was recalculated using vital statistics on live-birth/stillbirth combinations. The RR and 95 % CI of multiple births were calculated both per child unit (multiples as individual children) and per family unit (families with multiples). When calculating the RR per family, the total number of families was adjusted to account for families with multiples.
These analyses were performed both including and excluding missing values, since a very high number of missing values was expected. Missing values were treated as unexposed cases when missing values were included.
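A small sketch of the two missing-value strategies described above; the counts are taken from figures quoted in the Results but are used here only for illustration, and the variable names are mine, not the report's.

```python
# Placeholder illustration of the two strategies: exposure status missing from a
# report is either counted as "unexposed" or the case is excluded from the numerator.
total_cases = 437        # all fatal maltreatment cases in the reported period
exposed_cases = 14       # e.g., victims known to be multiples
unknown_status = 161     # victims whose multiple-birth status was not reported

unexposed_including_missing = total_cases - exposed_cases                   # missing -> unexposed
unexposed_excluding_missing = total_cases - exposed_cases - unknown_status  # missing dropped
print(unexposed_including_missing, unexposed_excluding_missing)
```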
---
Results
The total number of cases of fatal child maltreatment in the reported period was 437. The total number of person-years for children aged 0-17 years between July 2003 and March 2011 was estimated to be 159,550,946 in Japan. The estimated mortality rate due to maltreatment of children aged 0-17 years was 0.27 per 100,000 person-years.
The percentages of missing values for multiple births, low-birthweight and teenage pregnancy were 39.1 % (=161/412), 50.5 % (=208/412) and 35.1 % (=127/362), respectively. Among cases of fatal child maltreatment, 14 multiple births were identified from 13 families.
The RRs and their 95 % CIs are shown in Table 1. All RRs were statistically significant regardless of the risk factor and estimation method, and were strongly influenced by the inclusion/exclusion of missing values. The RRs of multiple births per individual were 1.8 (95 % CI 1.0-3.0) when including missing values and 2.7 (95 % CI 1.5-4.8) when excluding missing values. The RRs of multiple births per family were 3.6 (95 % CI 2.1-6.2) when including missing values and 4.9 (95 % CI 2.7-9.0) when excluding missing values. These RRs tended to be much lower than the RRs of teenage pregnancy (RR 12.9, 95 % CI 9.7-17.0 when including missing values; RR 22.2, 95 % CI 16.6-29.8 when excluding missing values) but slightly higher than the RRs of low-birthweight (RR 1.4, 95 % CI 1.1-1.9 when including missing values; RR 2.9, 95 % CI 2.0-4.0 when excluding missing values).
---
Discussion
According to the world report by UNICEF [6], the maltreatment death rates of children under the age of 15 ranged from lows of 0.1-0.2 to highs of more than 2.0 (per 100,000 person-years) in the richest 27 countries in the 1990s. The estimated mortality rate due to maltreatment of children under the age of 15 years was 0.32 (=422/130,716,055) per 100,000 person-years in the present study. It should be noted that this mortality rate does not include parent-child murder-suicide cases. If murder-suicide cases were included, the mortality rate would be nearly 0.55 (=723/130,716,055), suggesting that there is no serious underreporting of fatal child maltreatment in the present data.
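For readers who want to verify the quoted rates, a two-line check of the arithmetic (deaths per 100,000 person-years), using only the figures stated above:

```python
# Deaths per 100,000 person-years for children under 15, as quoted above.
person_years_under_15 = 130_716_055
print(422 / person_years_under_15 * 100_000)   # ~0.32 (maltreatment deaths only)
print(723 / person_years_under_15 * 100_000)   # ~0.55 (including murder-suicide cases)
```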
On the other hand, the incidence rate of total child maltreatment, including nonfatal cases, was difficult to estimate. One possible estimate can be made as follows. According to the report by the Ministry of Health, Labor and Welfare of Japan (in Japanese), the number of individuals using the listening and support services for child maltreatment in 206
The present data showed that families with multiple births had an increased risk of fatal child maltreatment. The RR, however, was not higher than the RR of teenage pregnancy. The results also showed that the RR of multiples per individual, namely of being a child member of a multiple birth, showed marginal significance and was not largely different from the RR of low-birthweight when missing values were included in the calculation.
The first reports that addressed the relationship between families with multiples (twins in this case) and child maltreatment were those of Robarge et al. [1] and their expanded study [2]. However, their research interest was not necessarily twins as a risk factor for child maltreatment, but the stressful situation associated with the birth of twins due to the increase in family members, inadequate spacing of children and rearing more than one infant at a time. Although their questionnaire survey of mothers was hospital-based, their results suggested that the proportion of child maltreatment in families with twins was higher than in families with singletons. The noteworthy finding was that the twins themselves were not necessarily abused, but rather the siblings of twins. This means that having twin children can reduce the time and energy that the mother has for meaningful relationships with the father and the other siblings within the family unit [1].
On the other hand, Nelson and Martin [3] reported that of 310 registered abused/neglected children, 16 (5.2 %) were twins, which was about 2.5-fold higher than the approximate general percentage of twins (2 %). They concluded that twins themselves were also at high risk, supporting the findings of Nakou et al. [7], who found that 4 of 50 registered abused children were twins. It is not surprising that multiples themselves are at high risk, since multiples have many general risk factors for child maltreatment, for example, low-birthweight, prematurity, birth defects and neonatal complications. According to the nationwide hospital-based data provided in 1986 by Tanimura et al. [4], of 231 children subjected to abuse or neglect, 23 (10.0 %) were products of multiple births (22 were twins). They compared this percentage to that of twin deliveries (number of mothers) in the general Japanese population (0.6 %). They should have compared the percentage with that of live multiple births, since their research interest was the risk of being abused as a twin, not the risk of abuse occurring in families with twins. According to the vital statistics, the percentage of multiple live births among total live births in 1986 was 1.4 %. The percentage of twins in the maltreated population was thus around 7-fold (=10.0/1.4) higher than in the general population.
It is important to note that the ratio of the percentage of a specific factor among fatal child maltreatment cases to the percentage in the general population, for example, the percentage of multiple births in fatal child maltreatment cases divided by the percentage of multiple births in the general population at birth, does not yield a correct estimate of the RR. This method instead underestimates the RR, since it does not consider the percentage of the singleton (unexposed) population or the age of the subjects, although the degree of underestimation appears modest. This method has nevertheless been used several times in studies of the child maltreatment of twins [3,4].
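The underestimation can be illustrated with the Tanimura et al. percentages quoted above. The sketch below ignores person-time and age structure and is purely illustrative, not a reanalysis of the original data.

```python
# Illustration with the Tanimura et al. figures quoted above: 10.0 % of maltreated
# children were multiples vs. 1.4 % multiple live births in the general population.
# Person-time and age structure are ignored here.
p_cases = 0.100   # proportion exposed among the maltreated children
p_pop = 0.014     # proportion exposed in the general population at birth

naive_ratio = p_cases / p_pop                                   # ~7.1
risk_ratio = (p_cases / p_pop) * ((1 - p_pop) / (1 - p_cases))  # ~7.8

print(f"percentage ratio: {naive_ratio:.1f}, risk ratio: {risk_ratio:.1f}")
```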
Using the data presented by Luke and Brown [8], the percentages of total maltreatment deaths before 1 year of age among singletons and multiple births from 1995 to 2000 in the US were recalculated as 0.0232 % (=4,325/18,636,575) and 0.0607 % (=47/77,460), respectively, which yields an RR of 2.62 (95 % CI 1.96-3.49) per child. This value is slightly higher than the present result, but not as high as that estimated by Tanimura et al. [4], although the age distribution of the victims was very different. The difference between the present data, the data of Luke and Brown [8] and the data of Tanimura et al. [4] is that the former two data sets concern fatal child maltreatment, i.e., child deaths, whereas the latter concerns survivors of maltreatment admitted to hospital. The higher proportion of twins in the data of Tanimura et al. [4], however, is not fully explained by this difference. One possible explanation is that multiples in general might be admitted to hospital more often than singletons for reasons other than child maltreatment, and thus were apt to be over-ascertained. More research should be performed on multiple-birth status among survivors of child maltreatment.
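The recalculation from the Luke and Brown counts can be reproduced with a few lines of Python. The Katz log method used here for the confidence interval is an assumption on my part, chosen because it reproduces the quoted interval closely.

```python
import math

def risk_ratio_ci(a, n1, b, n0, z=1.96):
    """Risk ratio with a Katz log-based CI; a/n1 = exposed risk, b/n0 = unexposed risk."""
    rr = (a / n1) / (b / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

rr, lo, hi = risk_ratio_ci(47, 77_460, 4_325, 18_636_575)
print(f"RR = {rr:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")   # roughly 2.6 (1.96-3.49)
```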
Most previous clinical studies focused on multiple births per child. This is not necessarily appropriate from a public health or preventive medicine point of view, because most difficulties in child rearing related to multiple births stem from rearing more than one child of the same age at the same time in the same family [5,[9][10][11]. For example, comparing two twin infants, one of low birthweight and one of normal birthweight, is sometimes a source of stress for mothers; such anxieties or feelings of stress may not arise when rearing only one low-birthweight singleton. If multiple births were treated as individual births, the risk associated with rearing two or more children of the same age at the same time in the same family would be underestimated. The rapid increase in iatrogenic multiple births is now a public health concern, one that goes beyond purely obstetric problems [12]. Nevertheless, this serious situation is rarely recognized, not only among child support workers but also among professionals in the field of parent and child health and even within families with multiples themselves [12].
According to the vital statistics, the total fertility rate in Japan has tended to decrease and has remained below two for a long period of time. This suggests that the risk of having at least one maltreated child may be higher in families with multiples, which have at least two children, than in families with singletons.
The present results also showed that teenage pregnancy was a significant risk factor for fatal child maltreatment. Luke and Brown [8], using US vital statistics, showed an increased risk of infant maltreatment death among healthy, full-term infants born to mothers aged 24 and younger.
Most of the limitations of the present study can be attributed to the data collection system itself. Although this study was based on the annual reports of a nationwide survey, the data gathering was far from comprehensive. The very high percentage of missing values for all three risk factors illustrates the difficulty of gathering data on child maltreatment. The present RRs should therefore be interpreted as indicating the general tendency of these three risk factors.
Many of the problems that occur during pregnancy and the perinatal period are associated with one another. For example, multiple births are associated with many perinatal problems, such as low-birthweight, Caesarean section, neonatal asphyxia, impending abortion/threatened premature delivery and pregnancy hypertension; about 70 % of multiples are low-birthweight in Japan [5]. Being a member of a multiple birth could be considered an additional risk factor for low-birthweight. The present aggregate data do not permit multivariate analyses controlling for such confounding factors.
According to the recent report by Schnitzer et al. [13], no single data source was adequate to provide thorough surveillance of fatal child maltreatment, but combining just two sources substantially increased case ascertainment. Unfortunately, most record linkage, including that between birth records and child maltreatment records, is almost impossible in Japan. The assumption made in the calculation of the RR, namely that the percentage of the exposed population in the general population was constant for children from birth to 17 years of age, was not necessarily appropriate.
The percentage of the exposed group might gradually decrease with age, since children in the exposed group would die more frequently than children in the unexposed group for reasons other than child maltreatment, especially at an earlier age. This seemed, however, to have little effect on the present results, since fatal child maltreatment is very rare and the overall mortality rate of children is extremely low in Japan.
In conclusion, recent Japanese nationwide data showed that families with multiple births had an elevated risk of fatal child maltreatment, but this risk was not as high as previously thought. Multiple births should be considered a risk factor for child maltreatment not only per individual child but also per family unit. Health care providers should be aware that multiple pregnancies/births may place significant stress on a family, and they should provide appropriate support and intervention to this potential high-risk group beginning in pregnancy.
---
Conflict of interest
The author declares no conflict of interests. | 18,669 | 1,615 |
cb1a8bd049db7e322cc781600539bbe9fd78aea2 | Early and Later Perceptions and Reactions to the COVID-19 Pandemic in Germany: On Predictors of Behavioral Responses and Guideline Adherence During the Restrictions | 2,021 | [
"JournalArticle",
"Review"
] | In March 2020, the German government enacted measures on movement restrictions and social distancing due to the COVID-19 pandemic. As this situation was previously unknown, it raised numerous questions about people's perceptions of and behavioral responses to these new policies. In this context, we were specifically interested in people's trust in official information, predictors for self-prepping behavior and health behavior to protect oneself and others, and determinants for adherence to social distancing guidelines. To explore these questions, we conducted three studies in which a total of 1,368 participants were surveyed (Study 1 | INTRODUCTION
On Sunday, March 22, 2020, Angela Merkel, the German Chancellor, announced that, in the fight against the spread of the novel Coronavirus, she and the prime ministers of the German federal states had agreed that public gatherings of more than two people would be prohibited temporarily for 14 days (Frankfurter Allgemeine Zeitung, 2020). Movement restrictions and social/physical distancing provisions had never existed before in the Federal Republic of Germany, and so it was unclear how people would react to them. Obviously, the COVID-19 pandemic has raised many questions in many scientific disciplines. The social sciences offer an abundance of theories to predict and explain human behavior in extreme conditions such as a pandemic. Among the first researchers recommending the application of relevant knowledge from the social and behavioral sciences to the context of the COVID-19 pandemic were Bavel et al. (2020). The extent to which they met the research zeitgeist is reflected in the number of citations: in October 2021, only about 1.5 years after the publication of their article, it had already been cited over 2,400 times. We also wanted to contribute to a better understanding of how people behave in this new situation. Therefore, it was important for us to examine which variables are central to acceptance of the measures and to behavioral responses in this context. Based on previous studies in the areas of pandemics (e.g., Ebola: Vinck et al., 2019), prevention measures (e.g., Rykkja et al., 2011), and risk communication (e.g., Baumgartner and Hartmann, 2011), we selected a set of potentially relevant variables. These include trust, political orientation, health anxiety, and uncertainty tolerance.
Like some previous studies (e.g., Longstaff and Yang, 2008; van der Weerd et al., 2011), we consider trust to be an important variable for human behavior in the context of a pandemic. The APA Dictionary of Psychology (2020) defines trust as "reliance on or confidence in the dependability of someone or something." However, trust is a broad concept and can refer to different aspects, depending on the perspective. The perspective relevant for us at the time of the first study was trust in infection statistics from official authorities, that is, the figures communicated by official institutions and governments. Previous research has shown that trust in political systems may influence people's reactions to restrictions; that is, trust is positively correlated with acceptance of prevention measures in a society (e.g., anti-terror measures, Rykkja et al., 2011) and linked to law compliance (Marien and Hooghe, 2011). Also, Rowe and Calnan (2006) have shown that trust in public systems and authorities positively influences the way people follow instructions. Greater trust in policy makers is associated with greater compliance with health policies such as testing or quarantining. These relationships have also been demonstrated in past pandemics (e.g., Ebola: Morse et al., 2016; Blair et al., 2017; Asian influenza and H1N1 pandemic: Siegrist and Zingg, 2014). There are some good summaries of the relevance of trust in the context of the Coronavirus pandemic (e.g., Balog-Way and McComas, 2020; Devine et al., 2021). Only recently, in the context of the COVID-19 pandemic, it has been shown that trust in institutions is associated with lower mortality rates (Oksanen et al., 2020). Since health authorities used infection and death statistics to justify their strict regulations and encouraged everyone to help "flatten the curve" (of new infections), we expected trust in these official statistics to be an important predictor of compliance with the protective measures. Therefore, we aimed at investigating trust in official information from different sources and formulated the following research question (RQ): RQ 1: How much do people trust in statistics on COVID-19 from official authorities?
In the course of the COVID-19 pandemic, the media constantly reported about people's reactions to the new circumstances. This included increased purchasing or even hoarding of products such as disinfectants, face masks, food and toilet paper (Statista, 2020a; Statistisches Bundesamt, 2020), as well as differences in people's compliance with social distancing measures (Lehrer et al., 2020; Statista, 2020b). Uncertainty about the virus itself, its origin, or the appropriate measures to combat it, coupled with a growing group of people who challenge established facts, set the stage for the rise of conspiracy theories. In such an environment, merely trying to convince people of the severity of the disease and the effectiveness of the prevention measures may not be sufficient to encourage protective behavior such as social distancing. Therefore, it is important not only to understand how much people trust official infection statistics, but also to explore further pandemic-relevant variables. First, it must be understood which variables are central to behavioral responses, in order to subsequently develop appropriate communication strategies. As behavioral responses, we considered three types of behavior: (A) self-centered prepping behavior (e.g., stocking up on face masks, food, or other essential goods; the term is also used by Imhoff and Lamberty, 2020, for hoarding everyday goods in the COVID-19 pandemic), and protective behavior to not infect (B) oneself and (C) others. We differentiate between protective behavior for oneself and for others for several reasons. For example, risk research shows that risk assessments differ depending on who the target person is (i.e., self vs. other, see Lermer et al., 2013, 2019). Furthermore, people differ in prosocial behavior (e.g., Eagly, 2009). While prosocial behavior is more pronounced in some people than in others, it need not be related to their self-protective behavior.
Complex and alarming world events are often accompanied by the emergence of conspiracy theories (McCauley and Jacques, 1979; Leman and Cinnirella, 2007; Jolley and Douglas, 2014). These theories assume that the event in question is the result of a secret plot by a powerful group (Imhoff and Bruder, 2014). Previous research suggests that political orientation may be associated with conspiracy beliefs. For instance, van Prooijen et al. (2015) found a positive association between extreme political ideologies (on both the right and the left) and the tendency to believe in conspiracy theories. The authors conclude that "political extremism and conspiracy beliefs are strongly associated due to a highly structured thinking style that is aimed at making sense of societal events" (p. 570). A study in Italy has shown that believing in conspiracies is linked to right-wing political orientation (Mancosu et al., 2017). In their recent study in the context of the COVID-19 pandemic, Imhoff and Lamberty (2020) showed that conservative political orientation was positively associated with self-centered prepping behavior. Due to these findings, we included political orientation in this research. Furthermore, at least two variables seem to be central to behavioral responses during health-threatening events: health anxiety and uncertainty tolerance. Today, numerous studies show that the COVID-19 pandemic increased levels of anxiety (e.g., Baloran, 2020; Choi et al., 2020; Petzold et al., 2020; Roy et al., 2020; Buspavanich et al., 2021). Fewer studies, however, specifically examine health anxiety and its links to reactions to the COVID-19 pandemic. Research shows that anxiety is linked to safety-seeking behavior (Abramowitz et al., 2007; Tang et al., 2007; Helbig-Lang and Petermann, 2010). For example, health anxiety has been linked to an increase in health information searching (Baumgartner and Hartmann, 2011). Sometimes, however, health anxiety can lead people to avoid relevant information that creates discomfort (Kőszegi, 2003). Avoiding information about a diagnosis, for example, seems to help reduce stress and anxiety while delaying beneficial action (Golman et al., 2017). In a recent article, Asmundson and Taylor (2020) report that people with high health anxiety also tend to engage in maladaptive behaviors such as panic purchasing. Thus, we were interested in the impact of health anxiety on people's behavioral responses in the COVID-19 pandemic.
Anxiety is associated with high uncertainty and often motivates people to take actions intended to reduce uncertainty (Raghunathan and Pham, 1999), such as increased information seeking (Valentino et al., 2009). The COVID-19 pandemic is a threat that is both dreadful and highly uncertain, and research has shown that such characteristics strongly influence people's perceptions of risk (Fischhoff et al., 1978). Perceived risk is influenced by uncertainty (Vives and FeldmanHall, 2018). Uncertainty during the current pandemic is high because SARS-CoV-2 is a novel virus that was until recently unknown to scientists. As a result, it is unclear how the pandemic will develop and difficult to accurately assess one's personal risk. Uncertainty is a state that is perceived as discomforting, and people generally strive to avoid it (Schneider et al., 2017). However, people differ in their tolerance for uncertainty (Grenier et al., 2005). Research on the tolerance of uncertainty goes back to Frenkel-Brunswik (1949), who observed that people systematically differ in dealing with ambiguous situations (Dalbert, 1999). People with a low level of uncertainty tolerance employ vigilant coping strategies such as intensified information seeking about the threatening event. In the context of the COVID-19 pandemic, this could result in reading the news more often than usual. At the same time, people with a low level of uncertainty tolerance tend to show avoidance strategies such as turning away from dreadful information about the threat (Grenier et al., 2005). Thus, we were interested in the impact of uncertainty tolerance on people's behavioral responses in the COVID-19 pandemic. Furthermore, gender and age also seemed important to consider, especially because results of recent studies in the COVID-19 context suggest that these are relevant characteristics regarding behavioral responses. For example, it was shown that women and older participants tended to be more willing to wear face masks (e.g., Capraro and Barcelo, 2020). Also, the results of a study conducted by Li and Liu (2020) suggest that women tend to engage in more protective behaviors during the COVID-19 pandemic than men, and the same seems to be true for older age (Li and Liu, 2020). In sum, we aimed at understanding how trust, political orientation, health anxiety, and uncertainty tolerance, in addition to gender and age, influence people's self-centered prepping behavior and protective behavior to avoid infection of oneself or others.
---
Political Orientation
Participants' political orientation was measured using the Left-Right Self-Placement scale developed by Breyer (2015). This scale measures political attitudes on a left-right dimension with a single item asking participants to locate themselves on a 10-point Likert scale with the poles left and right.
---
Health Anxiety
Health anxiety was measured using the German version of the Health Anxiety Inventory (MK-HAI) developed by Bailer and Witthöft (2014). This scale assesses the tendency toward health-related concerns with 14 items on a five-point Likert scale (1 = strongly disagree to 5 = strongly agree); sample item: "I spend a lot of time worrying about my health." These items were averaged to an index of health anxiety (Cronbach's α = 0.93).
---
Uncertainty Tolerance
We measured uncertainty tolerance with the Uncertainty Tolerance (UT) Scale developed by Dalbert (1999). This questionnaire captures the tendency to assess uncertain situations as threats or challenges with eight items on a six-point Likert scale (1 = strongly disagree to 6 = strongly agree); sample item: "I like to know what to expect." These items were averaged to an index of uncertainty tolerance (Cronbach's α = 0.70).
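Each of the multi-item measures above is averaged into a per-person index, with internal consistency reported as Cronbach's alpha. The following sketch, using hypothetical responses and a helper function that is not from the authors, shows how such an index and alpha can be computed:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert ratings."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Hypothetical answers of five participants to four five-point Likert items
ratings = np.array([[4, 5, 4, 5],
                    [2, 2, 3, 2],
                    [3, 3, 4, 3],
                    [5, 4, 5, 5],
                    [1, 2, 1, 2]])
scale_index = ratings.mean(axis=1)     # per-person index, as used for the scales above
print(round(cronbach_alpha(ratings), 2), scale_index)
```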
---
Self-Centered Prepping Behavior
Self-centered prepping behavior in the context of COVID-19 was measured using three items: "I bought face masks;" "I stocked up on food;" and "I stocked up on disinfectant."
The answer format was yes or no. "Yes" answers were summed to form a self-centered prepping behavior score. At the time of the study, it was not yet clear (at least to the public) that wearing a mask protects others more than oneself. In addition, masks were a scarce commodity at the time: at the beginning of the Corona pandemic, not even system-relevant institutions (e.g., hospitals) were supplied with sufficient quantities of masks (Biermann et al., 2020; WHO, 2020a), so masks were difficult to obtain. Also, an official requirement to wear masks in public (e.g., while shopping and on public transportation) was not introduced throughout Germany until April 29, 2020 (Mitze et al., 2020; The Federal Government Germany, 2020). We therefore understand buying face masks as hoarding behavior, that is, building up a stock of a certain good for a certain period of time. With this understanding, we follow the conceptualization of self-prepping behavior described by Imhoff and Lamberty (2020).
Protective Behavior for Self-Protection
Protective behavior to avoid infection was measured using four items. Individuals were asked to indicate change in behavior or new behavior on a five-point Likert scale (1 = strongly disagree to 5 = strongly agree). Items were: "I avoid public transport (contact with other people/visiting cafés and restaurants/meetings with friends) in order not to get infected. " These items were averaged to an index of behavior change for self-protection (Cronbach's α = 0.81).
---
Protective Behavior for Others
The protective behavior to not infect others was measured using the same four items as to measure behavior change for self-protection. However, these items were related to other people; a sample item reads, "I avoid public transport to protect others. " These items were averaged to an index of behavior change for others (Cronbach's α = 0.88).
---
STUDY 1 RESULTS
To answer RQ 1, participants' trust in statistics on COVID-19 infections from different official authorities was analyzed; the results are shown in Table 1 (response scale from 1 = strongly disagree to 7 = strongly agree). Trust in statistics from China was by far the lowest, whereas trust in statistics from the RKI was highest.
To answer RQ 2, we analyzed associations with behavioral responses by using correlation analyses. Results can be found in Table 2.
All variables, except uncertainty tolerance, showed significant associations with one of the three behavioral responses. To investigate the role of these variables to predict behavior change, we conducted linear multiple regression analyses. Findings are shown in Table 3.
Results show that lower levels of trust and higher levels of health anxiety are associated with more prepping behavior. Higher levels of trust in official statistics, being female and being of younger age within our sample were significant predictors of self-protecting behavior. There was also a tendency for higher levels of health anxiety to predict behavior change to avoid infections, which did not reach the significance threshold of p < 0.05 (p = 0.09). In the third model, being female was significantly associated with behavior change to not infect others. Furthermore, this latter model also indicated a tendency for higher levels of trust in official statistics and a more right-oriented political orientation to be associated with less behavior change to protect others. However, these results did not reach the significance threshold of 0.05 (trust: p = 0.08; political orientation: p = 0.07).
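The analyses reported in Table 3 are ordinary multiple linear regressions. As an illustration only (the authors do not state their software; the simulated data, placeholder coefficients and variable names below are assumptions), such a model could be fitted as follows:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "trust": rng.uniform(1, 7, n),          # trust in official statistics
    "pol_orient": rng.integers(1, 11, n),   # left-right self-placement
    "health_anx": rng.uniform(1, 5, n),
    "uncert_tol": rng.uniform(1, 6, n),
    "female": rng.integers(0, 2, n),
    "age": rng.integers(18, 70, n),
})
# toy outcome with placeholder coefficients, only to make the example run
df["prepping"] = 1.5 - 0.2 * df["trust"] + 0.4 * df["health_anx"] + rng.normal(0, 1, n)

model = smf.ols(
    "prepping ~ trust + pol_orient + health_anx + uncert_tol + female + age",
    data=df,
).fit()
print(model.summary())
```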
---
STUDY 1 DISCUSSION
Six major findings arise from Study 1: (a) Trust in official statistics from different authorities depended on the source of the statistics: Data from China were believed much less than data from Europe or Germany. Data from the RKI were most trusted. (b) Trust in official statistics was negatively correlated with self-centered prepping behavior, but positively correlated with behavior to protect oneself and others. This is also in line with other studies showing that trust in institutions of the political system is positively linked to law compliance (e.g., Marien and Hooghe, 2011). Moreover, the public health recommendations mostly focused on hygiene behavior to avoid infections rather than self-centered prepping behavior. In other words, by showing less self-prepping behavior and more of the recommended protective behavior, participants complied with the official recommendations, which may explain why trust decreased self-prepping behavior. Furthermore, these results are in line with a recently conducted study in which social trust (trust in others) was negatively linked to self-prepping behavior during the COVID-19 pandemic (Oosterhoff and Palmer, 2020). (c) Health anxiety predicted both self-centered prepping behavior and behavior change to protect oneself.
Research has shown that anxiety leads to actions to reduce uncertainty (Raghunathan and Pham, 1999), and both self-centered prepping behavior and recommended behavior changes (e.g., hygiene behavior) may serve this purpose among individuals with high health anxiety. Furthermore, anxiety has repeatedly been linked to general hoarding behavior (Coles et al., 2003; Timpano et al., 2009), and trait anxiety has also been positively linked to preventative behavior during the COVID-19 pandemic (e.g., avoiding going out and avoiding physical contact; Erceg et al., 2020). (d) Women were more likely to change their behavior to protect both themselves and others. Women not only tend to judge risks as higher than men do (e.g., Slovic, 1999) but also engage more in caring behavior (e.g., Archer, 1996) and show more safety-seeking than men (Byrnes et al., 1999; Lermer et al., 2016a; Raue et al., 2018). However, it is important to note that safety behavior may also increase health anxiety (Olatunji et al., 2011), which suggests a potential bidirectional effect. (e) Participants with right-wing political orientations were less likely to change their behavior to protect others. In sum, these findings not only show differences in people's trust in official statistics depending on their source but also that trust influences their behavior. These results demonstrate that trust, gained through clear and transparent information and communication by public authorities, is key to decreasing uncertainty, limiting the spread of false beliefs, and encouraging behavior change to protect everyone's health. A limitation of Study 1 is that we used a dichotomous answer format to assess participants' prepping behavior. Furthermore, we did not explicitly measure trust in government, acceptance of social distancing measures, or guideline adherence. Therefore, a follow-up study was planned in which we would assess self-centered prepping behavior in a more detailed way. The aim was to reinvestigate the correlations found and to additionally include the variables trust in government, acceptance of social distancing measures, and guideline adherence, thereby expanding the insights gained from Study 1. With this, we wanted to follow the call for replication-extension studies (Bonett, 2012; Wingen et al., 2020).
---
STUDY 2 METHOD
As the COVID-19 pandemic progressed, the duration of the government's restrictions was extended. To underpin our findings from Study 1, and to further explore the development of perceptions and reactions to the pandemic related restrictions, we replicated and extended Study 1. In addition to reinvestigating our three research questions, we addressed trust in the government as well as acceptance of and adherence to social distancing guidelines.
Trust in authorities is an important factor for the acceptance of many measures and is therefore particularly worth protecting and enhancing (Betsch et al., 2020d). As mentioned above, Rykkja et al. (2011) found that trust in political systems influences citizens' attitude toward prevention measures.
Research from previous epidemics showed that people who had less trust in the government took fewer precautions against the Ebola virus disease during the 2014-2016 outbreak in Liberia and Congo (Vinck et al., 2019;Oksanen et al., 2020). Furthermore, the social development at that time showed that acceptance of government measures per se is a particularly relevant variable.
During the pandemic, the media increasingly reported on violations of the health protective measures and on the closure of businesses, which led to high rates of unemployment. Around mid-April, people started demonstrating against the measures (Kölner Stadt-Anzeiger, 2020). The behavior of participants in demonstrations against the current measures suggested that acceptance of the measures has a strong influence on adherence to social distancing guidelines. Thus, we assessed participants' trust in the government and acceptance of the measures and raised the following research question: RQ 3: Which factors influence adherence to social distancing guidelines?
To explore this RQ, we analyzed the impact of the variables relevant for behavior change from Study 1, as well as trust in government and acceptance of measures on adherence to social distancing guidelines.
---
Participants and Procedure
Our second online survey was conducted between April 8 and April 23, 2020. For the recruitment of participants, we used the same sampling strategy as in Study 1; only the attention check item was changed. Again, data from participants who failed to answer the attention check item correctly (i.e., "If you would like to continue with this study then select 'agree'"; "agree" was the fourth of five response options) or who did not finish the questionnaire were not included. We changed the attention check item because it seemed more valid than the one used in Study 1: in the first study, the attention check was passed by clicking on the rightmost answer option, so participants could also have passed the check merely by showing a response pattern, such as always clicking on the rightmost answer option.
---
Measures
We applied the same measures for trust in official statistics (Cronbach's α = 0.85), political orientation, health anxiety (Cronbach's α = 0.93), and uncertainty tolerance (Cronbach's α = 0.70) as in Study 1. This also applies to the indices behavior change to avoid infection (Cronbach's α = 0.79) and behavior change to not infect others (Cronbach's α = 0.80). However, the item "I avoid visiting cafés and restaurants in order not to get infected [/not infect others]" was changed to "I pay more attention to the recommended hygiene rules than before the Coronavirus became known, in order not to get infected [/ not infect others]" due to the lockdown.
---
Self-Centered Prepping Behavior
In Study 2, we assessed self-centered prepping behavior in a more detailed way. In order to address some limitations of the first study, a Likert scale was used instead of a dichotomous response format, together with a symmetrical formulation of the items ("purchased" instead of "stocked up" and "bought"). In addition, three more items were developed to examine a wider range of behaviors (e.g., buying hygiene products or disposable gloves).
In total, we used six items where participants were asked to indicate on a seven-point Likert scale (1 = strongly disagree to 7 = strongly agree), how much each statement applied to them: "Purchased face masks" and "Purchased larger quantities of food [disinfectants/toilet paper/hygiene products/disposable gloves] than usual. " These items were averaged to an index of self-centered prepping behavior (Cronbach's α = 0.79).
---
Trust in the Government
Participants' trust in the government was assessed using two items "I have great trust in the federal government" and "I have great trust in the state government" with a seven-point Likert scale (1 = strongly disagree to 7 = strongly agree) answer format. These items were averaged to an index of trust in government (Cronbach's α = 0.91). Due to Germany's federal structure, we surveyed trust in the federal government and state government separately -as did the COVID-19 Snapshot Monitoring (COSMO) project, which is a well-known repeated cross-sectional monitoring project during the COVID-19 outbreak in Germany (see for instance COSMO COVID-19 Snapshot Monitoring, 2020).
---
Acceptance of the Measures
To assess participants' acceptance of safety measures, four items with a seven-point Likert scale (1 = strongly disagree to 7 = strongly agree) answer format were developed: "I think the current measures taken by the German government to combat the COVID-19 pandemic are good, " "I think the German government's communication of current measures to combat the COVID-19 pandemic is good, " "I think the current measures taken by the federal government to combat the COVID-19 pandemic are appropriate, " and "I think that the people responsible for planning and implementing the current measures have the necessary competence. " These items were averaged to an index of acceptance (Cronbach's α = 0.89).
---
Adherence to the Social Distancing Guidelines
To assess participants' adherence to the social distancing guidelines, five items were adapted from a measure of behavior during the COVID-19 pandemic by Rossmann et al. (2020):
Participants were asked to indicate on a five-point Likert scale (1 = never to 5 = very often) how often in the last 10 days the following applied to them: "I met with friends who live outside my household;" "I met with family members who live outside my household;" "I met with older people;" "I violated the 1.5 meters distance rule;" and "I disregarded regulations on social distancing or movement restrictions." These items were recoded so that higher values indicate more adherence and then averaged to an index of guideline adherence (Cronbach's α = 0.64).
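Because the five items ask how often guidelines were violated, the recoding step reverses each item before averaging. A tiny sketch with hypothetical responses (not the authors' data):

```python
import numpy as np

# Two hypothetical respondents, five items each (1 = never ... 5 = very often violated)
responses = np.array([[1, 2, 1, 1, 2],
                      [4, 5, 3, 4, 4]])
adherence_index = (6 - responses).mean(axis=1)   # reverse-code on a 1-5 scale, then average
print(adherence_index)   # higher value = more adherence; the first respondent adheres more
```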
---
STUDY 2 RESULTS
As in Study 1, to answer RQ 1, people's level of trust in statistics on COVID-19 from official authorities was compared and displayed in Table 1. Again, findings show that trust in statistics from China was by far lowest, whereas trust in statistics from the RKI was highest.
To reinvestigate RQ 2, the correlation analyses from Study 1 were replicated; results are presented in Table 4. Whereas in Study 1 all variables except uncertainty tolerance showed significant correlations with behavior change, in Study 2 all variables showed significant links with at least one behavior variable.
For comparison reasons, the same variables as in Study 1 were included in multiple regressions on the dependent variables self-centered prepping behavior, behavior change to avoid infection, and behavior change in order to not infect others (see Table 5). Again, results showed that health anxiety was positively associated with self-centered prepping behavior and behavior change to avoid infection of oneself. Further, the results again showed that trust in official statistics was positively associated with behavior change to avoid infections of oneself and others. Additionally, behavior change to not infect others was further predicted by being female and being less right-oriented, which mirrors the pattern of Study 1.
To investigate who shows more adherence to social distancing guidelines and answer RQ 3, correlations of guideline adherence with relevant variables from Study 1 (gender, age, trust in official statistics, political orientation, and health anxiety) as well as acceptance of the measures and trust in government were analyzed in a first step. The correlation matrix can be found in Table 6.
All variables except health anxiety and trust in government showed significant correlations with guideline adherence. Thus, all variables showing significant links were included as predictors in a multiple linear regression with guideline adherence as the dependent variable. Findings are presented in Table 7. Results show that adherence to the social distancing guidelines was positively associated with higher levels of acceptance of the measures, being female and being of older age. There was also a tendency of more right-oriented participants to adhere less to social distancing guidelines (p < 0.10).
We also conducted a moderation analysis to test whether political orientation moderates the relationship between acceptance of the measures and guideline adherence, using Hayes' PROCESS tool (model 1). Results showed a significant interaction effect of measure acceptance and political orientation (B = 0.03, SE = 0.01, t = 2.46, 95%-CI = [0.01; 0.05]). Analyses of conditional effects revealed no relationship between measure acceptance and guideline adherence (B = 0.02, SE = 0.03, t = 0.53, 95%-CI = [-0.05; 0.07]) for less right-wing orientated participants (1 SD below the mean). For participants with average values (mean-centered; B = 0.08, SE = 0.02, t = 3.25, 95%-CI = [0.02; 0.11]) and for those with more right-wing orientation (1 SD above the mean; B = 0.11, SE = 0.03, t = 3.77, 95%-CI = [0.05; 0.17]), results showed a significant relationship between acceptance of the measures and guideline adherence. Political orientation was non-normally distributed, with skewness of 0.08 (SE = 0.11) and kurtosis of 0.09 (SE = 0.22), indicating a slightly right-skewed, left-leaning distribution. The average value of M = 4.53 (SD = 0.08) was slightly below the midpoint of the scale; the median was 5. Moreover, while analyses for gender showed no moderation effect (p = 0.869), we found that age was a moderator of the effect of acceptance of measures on guideline adherence (B = 0.02, SE = 0.01, t = 2.35, 95%-CI = [0.00; 0.03]). Analyses of conditional effects revealed no relationship between measure acceptance and guideline adherence (B = 0.03, SE = 0.03, t = 0.83, 95%-CI = [-0.03; 0.09]) for participants aged around 23 years. For participants aged around 25 years (B = 0.06, SE = 0.02, t = 2.30, 95%-CI = [0.01; 0.10]) and for those aged around 29 years (B = 0.11, SE = 0.03, t = 3.95, 95%-CI = [0.06; 0.17]), results showed a significant relationship between acceptance of the measures and guideline adherence.
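The moderation analysis above was run with Hayes' PROCESS tool. As a rough equivalent (a sketch with simulated data and placeholder coefficients, not the authors' analysis or software), the interaction model and the conditional slopes at -1 SD, the mean, and +1 SD of the moderator can be computed as follows:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "acceptance": rng.uniform(1, 7, n),   # acceptance of the measures
    "pol": rng.uniform(1, 10, n),         # political orientation (left-right)
})
df["adherence"] = 3 + 0.03 * df["acceptance"] * df["pol"] + rng.normal(0, 0.5, n)

# mean-center the predictors, then fit the interaction model
df["acc_c"] = df["acceptance"] - df["acceptance"].mean()
df["pol_c"] = df["pol"] - df["pol"].mean()
fit = smf.ols("adherence ~ acc_c * pol_c", data=df).fit()

# conditional (simple) slopes of acceptance at -1 SD, the mean, and +1 SD of orientation
b_acc, b_int = fit.params["acc_c"], fit.params["acc_c:pol_c"]
sd_pol = df["pol_c"].std()
for label, m in [("-1 SD", -sd_pol), ("mean", 0.0), ("+1 SD", sd_pol)]:
    print(label, round(b_acc + b_int * m, 3))
```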
---
STUDY 2 DISCUSSION
Study 2 successfully replicated the findings from Study 1: (a) In the further course of the pandemic, there were still differences in trust in official statistics from different authorities: Again, data from China were believed much less than data from Europe or Germany, whereas data from the RKI were most trusted. (b) As in Study 1, results showed that health anxiety increases self-centered prepping behavior and behavior change to avoid infections. Also, trust in official statistics increased behavior change to avoid infections. Replicating the findings from Study 1, results from Study 2 indicate that being female, being less politically right-oriented, and having trust in official statistics increases behavior change in order to not infect others.
In addition to replicating findings from Study 1, Study 2 aimed at investigating influences on adherence to social distancing guidelines. Results show that guideline adherence was positively associated with older age, being female, less right-wing political orientation, and higher acceptance of the measures. A recently conducted study on guideline adherence during the pandemic in the United States also reports a small positive relationship with age (Bogg and Milad, 2020). However, the authors did not show a significant association with gender. Findings from previous research do however support the assumption that women tend to show more precautionary behaviors to avoid infections. For instance, studies show women generally practice more frequent hand-washing than men (Liao et al., 2010;Park et al., 2010). Furthermore, findings from a meta-analysis (Moran and Del Valle, 2016) indicate inherent differences in how women and men respond to pandemic diseases: women are more likely to practice preventative behavior (e.g., face mask wearing) and avoidance behavior (e.g., avoiding public transit) than men. The finding that adherence to social distancing guidelines was positively associated with being less politically rightoriented fits to findings from studies recently conducted in the COVID-19 pandemic in the United States. Conway et al. (2020) argue that although much research suggests that conservatives are more sensitive to disease threats, they seem to be less concerned about the COVID-19 pandemic than liberals. However, the authors add that this ideological effect diminishes as experiences with, and the impact of the COVID-19 pandemic grows. Furthermore, our findings are supported by another recently study conducted during the COVID-19 pandemic. In this study, liberals and politically moderates show more guideline adherence than conservatives (van Holm et al., 2020). It is intuitively plausible that guideline adherence increases with acceptance of the measures. However, the moderation analysis revealed that political orientation influences the relationship between acceptance of measures and guideline adherence. This interaction effect showed that for less rightwing-orientated participants adherence to social distancing guidelines was not linked to acceptance of the measures. This link was only found in people with moderate political orientation (average values) and in people with more right-wing orientation. These findings are in line with findings from other studies in the COVID-19 context. For instance, also Capraro and Barcelo (2020) report in a recent preprint that demographic variables and political orientation are relevant characteristics in the context of protective behavior. According to their findings, being female, being older, and being leftleaning are correlated with greater intentions to wearing a face covering. Also, studies from Gollwitzer et al. (2020) and Van Bavel et al. (2020) show that supporters of right-wing political parties were less likely to adhere to protective behavior compared to liberal or left-leaning individuals.
One year after Study 1 and Study 2, the COVID-19 pandemic was still having a major impact on our daily lives and causing restrictions on social contact in Germany. However, since many people may also have become accustomed to these circumstances, we aimed at reinvestigating our research questions.
---
STUDY 3 METHOD
As the COVID-19 pandemic progressed, restrictive measures in Germany also continued. Therefore, another goal of this research project was to investigate the research questions of the two preceding studies 1 year later. For this purpose, we conducted Study 3, a replication of Study 2. Since we had no assumptions regarding changes in perception and behavioral responses to the consequences of the COVID-19 pandemic, we did not formulate explicit hypotheses and instead reexamined our research questions.
---
Participants and Procedure
Our
---
Measures
We applied the same measures for trust in official statistics (Cronbach's α = 0.85), political orientation, health anxiety (Cronbach's α = 0.92), and uncertainty tolerance (Cronbach's α = 0.66) as in Study 2. This also applies to the indices self-centered prepping behavior (Cronbach's α = 0.76), behavior change to avoid infection (Cronbach's α = 0.78), behavior change to not infect others (Cronbach's α = 0.80), trust in the government (Cronbach's α = 0.91), acceptance of the measures (Cronbach's α = 0.89), and adherence to the social distancing guidelines (Cronbach's α = 0.64).
---
STUDY 3 RESULTS
As in Study 1 and Study 2, to answer RQ 1, people's level of trust in statistics on COVID-19 from official authorities was compared and displayed in Table 1. The results show, as in the two previous studies, that trust in statistics from China was by far lowest, whereas trust in statistics from the RKI was highest.
To reinvestigate RQ 2, the correlation analyses from Study 1 and Study 2 were replicated; results are shown in Table 8. The correlations of gender, trust in official statistics, and health anxiety with the behavior variables were stronger than in the studies from 2020, whereas the links of age and political orientation with behavior change were weaker.
For comparison reasons, the same variables as in Study 1 and Study 2 were included in multiple regressions on the dependent variables of self-centered prepping behavior, behavior change to avoid infection, and behavior change in order not to infect others (see Table 9). As in the studies from 2020, results showed that health anxiety was positively associated with self-centered prepping behavior and behavior change to avoid infection of oneself. Furthermore, in 2021, health anxiety was positively associated with behavior change in order not to infect others. Further in line with the previous studies, the results showed that trust in official statistics was positively associated with behavior change to avoid infections of oneself and others. However, in 2021, these associations were much stronger. Additionally, being female was positively associated with all behavior variables, which mirrors the pattern of Study 1 and Study 2. However, political orientation was not associated with any behavior variable in Study 3.
To investigate RQ 3, asking who shows more adherences to social distancing guidelines, correlations of guideline adherence with variables used in Study 2 (gender, age, trust in official statistics, political orientation, health anxiety, acceptance of the measures, and trust in government) were analyzed in a first step. The correlation matrix can be found in Table 10.
As in Study 2, age, trust in official statistics and acceptance of the measures showed significant correlations with guideline adherence (the variable guideline adherence was only collected from Study 2 onwards). For comparison reasons, the same variables as in Study 2 were included in a multiple regression on the dependent variable guideline adherence. Findings are presented in Table 11. As the correlational findings already indicated, adherence to the social distancing guidelines was positively associated with higher levels of acceptance of the measures, being of older age, and having more trust in official statistics.
As in Study 2, we conducted moderation analyses to test whether political orientation, gender, and age are moderators of the relationship between acceptance of the measures and guideline adherence, using Hayes' PROCESS tool (model 1). The distribution of political orientation in Study 3 was similar to that in Study 2: political orientation was again non-normally distributed, with a skewness of 0.06 (SE = 0.11) and kurtosis of -0.11 (SE = 0.21), indicating a slightly right-skewed, left-leaning distribution. The average value of M = 4.40 (SD = 0.07) was slightly below the midpoint of the scale; the median was 5. However, results showed no moderation effect for political orientation (p = 0.954). Moreover, neither gender (p = 0.988) nor age (p = 0.837) moderated the effect.
---
STUDY 3 DISCUSSION
Study 3 successfully replicated findings from Study 1 and Study 2: (a) One year after the surveys in March and April 2020, there were still differences in trust in official statistics from different authorities: again, data from China were believed much less than data from Europe or Germany, whereas data from the RKI were most trusted. (b) Results from all three studies showed that health anxiety increases self-centered prepping behavior and behavior change to avoid infections. Also, trust in official statistics increased behavior change to avoid infections. Regarding behavior change in order not to infect others, results in Study 3 differ slightly from Studies 1 and 2. Whereas in the first two studies being female, being less politically right-oriented, and having trust in official statistics were positively associated with behavior change to protect others, Study 3 indicates that political orientation is no longer a relevant predictor for behavior change in order not to infect others. Moreover, neither political orientation, gender nor age showed up as moderators in Study 3. Instead, health anxiety turned out to predict behavior change in order not to infect others. This leads to the assumption that the Corona pandemic has become less an issue of political orientation than of individual characteristics related to health-related behaviors. Like Study 2, Study 3 aimed at investigating influences on adherence to social distancing guidelines. Again, results show that guideline adherence was positively associated with older age and higher acceptance of the measures. In addition, and contrary to Study 2, higher levels of trust in official information turned out to be a relevant predictor for guideline adherence, too. However, no associations were found with gender and political orientation. These findings indicate that the importance of the various predictors for guideline adherence changed as the global pandemic progressed. A relevant factor in this context may be that the level of general acceptance of preventive measures declined substantially between the time points of Studies 2 and 3. Thus, the importance of political orientation might have decreased because support for social distancing guidelines has declined in all population groups. This trend has already been suggested by Conway et al. (2020), who argue that ideological effects diminish as experiences with, and the impact of, the COVID-19 pandemic grow. In contrast, trust in official information has become more relevant. This is consistent with recent findings from other studies: Bargain and Aminjonov (2020) found that higher trust was associated with decreased mobility related to non-necessary activities, and Fridman et al. (2020) report that higher levels of trust in government information sources are positively related to adherence to social distancing.
---
GENERAL DISCUSSION
Today, there are numerous psychological studies on the COVID-19 pandemic context. However, many of these studies focus on screening for negative (mental health) effects of the COVID-19 pandemic. The aim of our studies was to capture early and later perceptions of and behavioral reactions to the COVID-19 pandemic. Our three studies give insights into three important dimensions in the context of the COVID-19 pandemic: results from March 3, 2020, to April 21, 2020, show that trust in the RKI was consistently very high, even higher than trust in the German Federal Ministry of Health, the Federal Government and the WHO. For 2021, however, results from Betsch (2021) show that trust in general (in government and in authorities) has declined somewhat. Furthermore, the present findings show that trust in the official statistics is a predictor of behavior change and guideline adherence. Therefore, effort should be made to ensure that trust in the data is maintained, especially in contexts where long-term measures are required, like the COVID-19 pandemic.
Health anxiety was linked to self-centered prepping behavior and behavior change to reduce personal risk in all three studies. These findings are not only intuitively plausible but also supported by other studies showing that anxiety is linked to safety behavior (e.g., Erceg et al., 2020). Our analyses also revealed bidirectional effects regarding health anxiety and prepping behavior (Studies 1-3) and between health anxiety and behavior change to avoid own infection (Studies 1-3). Behavior change in order not to infect others was only associated with health anxiety in Study 3. This is in line with research from Olatunji et al. (2011) and emphasizes the importance of further research in the context of health anxiety. Age was not, or only negligibly, associated with self-centered prepping behavior. This is in line with findings from the German Corona Monitor regarding panic buying (waves 1, 2, and 3: Betsch et al., 2020a,b,c). However, gender seems to be relevant when it comes to behavior change to avoid risks to oneself and others. In all three studies, women reported higher values on the behavior change variables (both to avoid own infection and to protect others) than men. Previous research has shown that women are more safety-oriented (Lermer et al., 2016b), especially in the health domain (Thom, 2003; Lermer et al., 2016a). Women also tend to behave more prosocially in general (Archer, 1996) than men. Our findings imply that these observations also apply during the COVID-19 pandemic.
Results from the present study (samples 2 and 3) indicate a positive effect of acceptance of the measures and trust in the government, a moderate positive effect of trust in official statistics, and a small negative effect of being more politically right-wing oriented (Study 2) on adherence to social distancing guidelines. Betsch et al. (2020d) report in their Corona Monitor that German acceptance of the measures had risen sharply since mid-March 2020 and then decreased somewhat, with some fluctuations, until April 2021 (Betsch, 2021). However, overall acceptance of most of the measures was still at a high level. Our study is in line with these findings. Our results reveal that approximately 1 year after the outbreak of the Coronavirus pandemic, adherence to official guidelines regarding social distancing had declined somewhat. Research has shown that trust in authorities is an important factor for the acceptance of environmental measures (Zannakis et al., 2015) and adherence to health guidelines (Gilles et al., 2011; Prati et al., 2011; Quinn et al., 2013; Sibley, 2020). Adherence to social distancing guidelines was higher among people who were older, female, less right-wing oriented, and more accepting of the measures (Study 2). Betsch et al. (2020d) also reported small positive effects of age and (a marginally significant effect of) being female on safety behavior (i.e., using face covering) in the context of the COVID-19 pandemic. Further analyses showed that the association between acceptance of the measures and guideline adherence was moderated by political orientation (Study 2). It should be noted that the variable political orientation was not normally distributed but slightly right-skewed (i.e., most participants leaned left). However, low values (1 SD below the mean) can be interpreted as more left-wing oriented, average values (mean) as neutral, and high values (1 SD above the mean) as more right-wing oriented. Thus, the results can be interpreted as follows: for politically left-wing-oriented participants, acceptance of the measures had no effect on their guideline adherence, whereas data from politically neutral and right-wing-oriented participants showed a positive link between acceptance of the measures and guideline adherence. Interestingly, the antecedents of social distancing changed over the course of a year. Gender and political orientation no longer predicted adherence to guidelines in Study 3, while trust in government became more relevant. These findings are particularly important for the current COVID-19 pandemic and for future considerations in dealing with pandemics. Obviously, the importance of political orientation decreased as the Coronavirus pandemic progressed. From a practical perspective, policymakers should periodically review and challenge their assumptions about the public's perception of the pandemic situation. In this way, communication of the necessary measures can be adjusted in the best possible way. Here, it is of particular importance to maintain the trust of the public, especially when support for anti-Coronavirus measures declines. In addition to general trust in the government, however, trust in the government's competencies is especially relevant. Fancourt et al. (2020, p. 464) summarize: "Public trust in the government's ability to manage the pandemic is crucial as this trust underpins public attitudes and behaviors at a precarious time for public health."
We see further practical implications of these study findings primarily in that the results presented here may be helpful in developing and communicating interventions. The results confirm that perceptions and behavioral responses differ in Germany, both at the onset of the COVID-19 pandemic and 1 year later. As other studies (e.g., Warren et al., 2020) suggest, the government should not only ensure that trust in the government is and remains high but also consider how different groups of people are addressed in campaigns.
Today, more than ever, researchers are called upon to replicate research (Bonett, 2012; Wingen et al., 2020). This can be done by conceptual or exact replications (e.g., Stroebe and Strack, 2014). We consider conceptual replications especially important: here, the aim is not to repeat exactly the same procedure, but to find the same results when the same underlying idea is tested. At the time of our data collection, it was not yet possible to foresee what the research on the COVID-19 context would be like. We very much welcome the fact that so many scientists are taking up this relevant topic. This will increase the likelihood of reducing the negative consequences of future challenges such as this pandemic. Some limitations of the study must be mentioned. All three studies were correlative cross-sectional studies. Therefore, no cause-effect relationships can be proven, and future studies should consider longitudinal designs. As in many psychological studies, our samples were convenience samples and consisted of students. However, since the institution where participants were recruited is a part-time university, the students are all employed and on average older than full-time students. Furthermore, in all studies, most of the participants were female. Women tend to perceive higher risks, show more risk-averse behavior than men (Byrnes et al., 1999; Harris and Jenkins, 2006), and are more anxious than men (Maaravi and Heller, 2020), which may have influenced the study's results. In general, there is a high consistency between our results and those of similar studies. For example, other studies have shown that women report higher levels of social distancing than men (Pedersen and Favero, 2020; Guo et al., 2021). This is in line with our finding that being female predicts greater adherence to social distancing guidelines and behavior change in order not to infect others. Therefore, the unequal gender distribution in our sample does not seem to have distorted the results. Nevertheless, more emphasis should be put on a balanced gender distribution in future studies. Since we asked relatively personal questions (e.g., prosocial behavior), it cannot be guaranteed that there is no social desirability bias in the data. Socially desirable responding to questionnaire items is a general problem in studies relying on self-report. Consequently, future studies should aim to replicate our research findings with more indirect measures. However, the consistency of our results with the current state of research suggests that the findings can be successfully replicated. Another important limitation concerns the fact that we only measured behavioral intentions but not actual behavior. Thus, future research should focus on identifying variables that can be used to observe actual behavior. Another interesting approach for future research is to consider individualism and collectivism. The results of a recently published study analyzing data from 69 countries show that the more individualistic (vs. collectivistic) a country is, the higher the COVID-19 infection rates were (Maaravi et al., 2021). Furthermore, future studies in the COVID-19 context should investigate the influence of information sources such as social network platforms in the context of trust (Bunker, 2020; Limaye et al., 2020).
Overall, the present findings are helpful for targeting specific groups in preventive campaigns in the context of a pandemic. The fact that differentiated communication can be relevant is also described by Warren et al. (2020) in the COVID-19 vaccine context. A review by Bish and Michie (2010), conducted to identify key determinants of safety behavior in the context of the 2009 H1N1 influenza pandemic, reports that being female and of older age is linked to adopting safety behaviors. This is also confirmed by the results of the present studies for the COVID-19 context. In addition, trust, a less right-wing political orientation, and acceptance of measures were shown to be relevant variables for safety behavior. These findings show how important it is to consider individual differences when it comes to prevention measures implemented on a large scale for the sake of a greater good.
---
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are publicly available. This data can be found at: https://osf.io/y7hxe/.
---
ETHICS STATEMENT
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.
---
AUTHOR CONTRIBUTIONS
All authors developed the study concept, contributed to the study design, and interpreted the results. Material testing and data collection were performed by EL and MH. The data were analyzed by EL and MH. EL drafted the manuscript, and MH, MR, SG, and FB provided critical revisions. All authors contributed to the article and approved the submitted version.
---
Conflict of Interest:
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Copyright © 2021 Lermer, Hudecek, Gaube, Raue and Batz. This is an openaccess article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. | 55,000 | 641 |
4ac2c4f6677768b79dad31277cfd671def818aba | Peri-Urbanism in Globalizing India: A Study of Pollution, Health and Community Awareness | 2,017 | [
"JournalArticle",
"Review"
] | This paper examines the intersection between environmental pollution and people's acknowledgements of, and responses to, health issues in Karhera, a former agricultural village situated between the rapidly expanding cities of New Delhi (India's capital) and Ghaziabad (an industrial district in Uttar Pradesh). A relational place-based view is integrated with an interpretive approach, highlighting the significance of place, people's emic experiences, and the creation of meaning through social interactions. Research included surveying 1788 households, in-depth interviews, participatory mapping exercises, and a review of media articles on environment, pollution, and health. Karhera experiences both domestic pollution, through the use of domestic waste water, or gandapani, for vegetable irrigation, and industrial pollution through factories' emissions into both the air and water. The paper shows that there is no uniform articulation of any environment/health threats associated with gandapani. Some people take preventative actions to avoid exposure while others do not acknowledge health implications. By contrast, industrial pollution is widely noted and frequently commented upon, but little collective action addresses this. The paper explores how the characteristics of Karhera, its heterogeneous population, diverse forms of environmental pollution, and broader governance processes, limit the potential for citizen action against pollution. | Introduction
This paper examines the intersection between environmental pollution and people's acknowledgement of health concerns in peri-urban India. These peri-urban spaces have been "crying out for attention" [1] and are places where neoliberal policies, real-estate booms, land speculation, information technology advances, and the relocation of industrial waste have "transformed the pace of development" [2]. They are the consequence of urban expansion and a visible manifestation of urban socio-spatial inequalities. One such area, explored in this paper, is situated between the rapidly expanding cities of New Delhi (India's capital) and Ghaziabad (an industrial district in Uttar Pradesh), where urban growth has created a peri-urban interface: "a territory between" rural and urban livelihoods, activities, and services [3]. Peri-urban spaces are generally characterized by a predominance of poor and disadvantaged residents; a lack of services, infrastructure and facilities; degraded natural resource systems [4]; and industrial hazards [1,2,5]. This research focuses on Karhera, a peri-urban village sandwiched between Delhi and Ghaziabad, which was, until about two decades ago, predominantly agricultural. Despite being surrounded by urban growth, some of Karhera's residents have continued to practice agriculture. This emphasis on agriculture forms an important component of Karhera and sets it apart from the rapidly-urbanizing spaces around it. However, the nature and scale of this agriculture have changed considerably. Cereal crops (maize, wheat, rice and sorghum) and vegetables have been replaced by spinach, grown to take advantage of the urban demand for fresh vegetables. Urbanization has also affected the natural environment and the resources used for agriculture, with less available land for cropping and fewer spaces to keep livestock. Other environmental resources, such as water, have become polluted and degraded, making it hard to continue farming. Not all is negative, however, and these changes have been accompanied by new farming opportunities. Karhera is thus an ideal context in which to explore the significance of peri-urban places in relation to the "dynamics, diversity and complexity" of pollution and health [6,7]. Such places have particular place-based characteristics and face significant sustainability challenges [8,9]. Karhera, like India's other peri-urban places, is also the physical manifestation of globalization processes, created and molded by regional agglomeration, liberalization, urbanization, global economic integration, and "re-structuring for globalized systems of production and consumption" [10,11].
Considerable attention has focused on how communities identify pollutants and toxins in their localities, and on how their mobilization can and does effect change. Much of this has emphasized participatory processes, in particular the collective actions undertaken in the form of popular epidemiology, often in conjunction with sympathetic researchers to co-produce scientific evidence of pollution and ensure engagements with policy makers [12][13][14][15][16]. While collective mobilization is seen as a viable means of challenging vested interests and of forcing government and policy actors to pay attention to environmental pollution and health threats [17], collective activism and citizen science research have not always guaranteed such emancipated outcomes [12]. Until recently, however, little attention has been paid to contexts where citizen science alliances do not occur, and where community residents fail to publicly acknowledge, and do not collectively act upon, the intersections between environmental pollution and health. This recent research has explored the factors which constrain mobilization, and in doing so has shown that this is not simply a result of the well-known diversionary tactics used by the state and political and economic elites (such as media control, limiting citizen participation, emphasizing economic benefits, discrediting scientific evidence of harm to health, casting blame on potential sources of contamination, and bullying, ostracizing, and discrediting activists) to forestall resistance [18]. Adams and Shriver [19] show, for example, how protests against coal mining in Czechoslovakia struggle to direct their activism as economic and political flux creates a situation where the targets are vague and ambiguous. Auyero and Swistun [20] explore the "slow violence" perpetrated by petrochemical companies in an Argentinian shantytown, where the environmental damage is neither dramatic nor visible. Here the slow accretion of toxins results in long-term habituation, which in turn has meant that, even once the extent of the damage was known, residents remained uncertain and conflicted about their exposure to contamination and the associated risk. This research has also recognized that citizens may welcome the economic opportunities that produce environmental degradation and corresponding health risks because of their poverty, economic dependence, and marginalization [18,21,22]. Underlying all of this work is an emphasis on place and economic context as shaping people's responses to environmental pollution and health threats, showing that the "relational interplay between place characteristics and their meaning-making for health is often contingent and contested" [7]. Whereas environmental and health hazards in peri-urban areas have been thoroughly documented [23,24], this paper adds to an emerging body of work which examines people's conflicted identification of health and environment threats in peri-urban areas of the global south, and the place-based factors which shape the potential for collective action against environmental pollution. This article addresses this knowledge gap by exploring the peri-urban area of Karhera where, despite emerging environment and health threats associated with urbanization, there is little evidence of citizen mobilization tackling these issues.
We investigate why this might be by constructing a descriptive demographic and social profile of the area and its residents, and by exploring their perceptions and feelings regarding the intersection of environmental pollution and health in their rapidly changing peri-urban environment.
After outlining methods used in this research, this article goes on to present findings including the contextual and demographic profile of the area, and how it has changed over time as a result of urbanization. Highlighted are changes in the socio-economic composition of the area, shifts in land and water use/availability, and associated livelihood strategies, as well as increasing levels of apparent environmental pollution, growing potential health risks, and the perceptions of residents around these issues. This is then followed by a more in-depth discussion section which synthesizes these multiple shifts and the perceptions of residents to explore how these dynamics interact with each other in complex ways that ultimately undermine the potential for collective action. By making these underlying dynamics visible, this article provides important insights into the challenges faced by citizens and civil society groups who seek to build collective action movements against pollution and health risks in peri-urban areas.
---
Materials and Methods
In this paper, we integrate a relational view of place, which sees place as "having physical and social characteristics... [which] are shaped by and given meaning through their interactions with politics and institutions, with one another and, most importantly, with the people living in a place" [7], with an interpretive approach, which seeks to integrate people's views with "an analysis of cultural phenomena, social conditions and structural constraints" [25]. We focus on people's diverse interpretations of facts; their emic experiences; how they create meaning through social interactions; how relationships and social dynamics have changed over time; the effects of formal policies; and political processes and power relations. Fieldwork was undertaken in Karhera, Ghaziabad District, between August 2014 and May 2015, using a variety of fieldwork methods which sought to capture multiple ways of portraying and comprehending the place, Karhera [7]. This included surveying 1788 of the 2042 households in September 2014. The survey asked about household composition, caste, primary and secondary sources of livelihood, home ownership, perceptions of health and hospitalization (over the past 5 years), and land ownership. The survey was conducted by the authors of this paper, based at the Centre of Social Medicine and Community Health, Jawaharlal Nehru University. Researchers aimed to survey every household in Karhera, but 152 households were unwilling to be interviewed, and in 102 households no-one was home and doors were locked. Given that 88% of households ultimately participated in the survey, it is reasonable to treat the data as representative of the community as a whole. The researchers asked questions and completed the answers on paper survey instruments. The data were processed with SPSS version 22.0 (IBM Corporation, New York, NY, USA) and used to identify emergent themes around environmental degradation, pollution, health, and emergent risk.
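The survey analysis itself was carried out in SPSS; purely as an illustration, the short Python sketch below shows the household response-rate arithmetic reported above and the kind of cross-tabulation that underlies the livelihood percentages reported in the Findings. The file name, column names, and toy data are hypothetical, not the study's actual variables.

```python
# Illustrative sketch only: the study used SPSS 22.0; this is a pandas equivalent
# with hypothetical column names and toy data.
import pandas as pd

TOTAL_HOUSEHOLDS = 2042
REFUSED, ABSENT = 152, 102
surveyed = TOTAL_HOUSEHOLDS - REFUSED - ABSENT               # 1788 households
print(f"Response rate: {surveyed / TOTAL_HOUSEHOLDS:.1%}")   # ~87.6%, reported as 88%

# In practice this would come from the survey file, e.g.:
# df = pd.read_csv("karhera_survey_2014.csv")  # hypothetical file name
df = pd.DataFrame({
    "resident_type": ["original", "original", "migrant", "migrant"],
    "primary_livelihood": ["agriculture", "business", "private_sector", "manual_labour"],
})

# Share of all surveyed households in each livelihood category, split by resident type
livelihood_shares = (pd.crosstab(df["primary_livelihood"], df["resident_type"],
                                 normalize="all") * 100).round(1)
print(livelihood_shares)
```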
Qualitative research methods, undertaken between October 2014 and May 2015, provided a detailed understanding of agricultural livelihoods in relation to changing circumstances and urbanization. Twenty in-depth semi-structured interviews were undertaken, 10 with men and 10 with women who were identified from the survey. The following criteria informed our selection: involved in agriculture (transporting, buying or selling produce, growing crops, working as laborers, sharecroppers, or leasing agricultural land), migrant household men and women or men and women who were original inhabitants and were willing to participate in the research. Informants came from different castes and socio-economic groups. These discussions usually lasted about an hour, and interviews took place in locations that were mutually convenient such as participants' homes or fields. These face-to-face interviews were conducted by the researchers in Hindi, recorded, translated and transcribed. The interviews examined agricultural-based livelihoods, people's use of natural resources (water, firewood etc.), and personal experiences of urbanization, poverty, and community relations. We then undertook four participatory mapping sessions with (a) men involved in agriculture; (b) women involved in agriculture; (c) men who were actively farming or marketing spinach; and (d) women associated with buffalo rearing and/or spinach. Participants were recruited in ways customary to anthropological research practice: through formal and informal interaction with the researchers, who lived in the community for periods of the research. These sessions, which lasted half a day and took place in a local Hindu temple, brought together small groups of people (between 5 and 12) associated with the dominant crop (spinach) and animal husbandry to collectively hand-draw maps of the Karhera they had known 20 years ago. As a part of the participatory mapping sessions, participants reflected on changing agricultural livelihoods; new uses of space and resources; and, the implications of these on labor practices, water availability, and gender relations. They also discussed topics such as food availability, distribution and exchange; food preferences; the nutritional and social values associated with agricultural crops; changing political relations associated with access to land and water; and, poverty and health. The interviews and participatory mapping sessions complemented the survey by providing further insights into the relationships between agriculture, livelihoods, and people's collective responses to social and environmental change and by facilitating triangulation of patterns and trends.
Finally, a review of media articles on environment, pollution, and health in the Trans-Hindon region, published between 2005 and 2015, was undertaken. This was initiated by searching the Centre for Science and Environment (CSE) website. The CSE website has an India Environment Portal (http://www.indiaenvironmentportal.org.in/) which archives articles from leading newspapers, books, magazines, etc. related to issues of environmental concern. The CSE's magazine, "Down to Earth", was searched for significant articles related to the Hindon River and the Trans-Hindon region, and the websites of two English daily newspapers (The Hindustan Times and The Hindu) and two regional-language daily newspapers (Jagran and Amarujala) were selected because of their popularity and readership in the region. The search terms, identified from the qualitative research, were: agriculture, fertilizer, pesticides, health risks, diseases, cancer, pollution, waste water, effluents, industrial waste, poverty, and urbanization. Special attention was focused on articles that linked environment, pollution, and health with political mobilization. This exercise allowed us to see what types of pollution and health issues were being taken up by local newspapers, which may reflect, or influence, residents' levels of concern.
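As a rough illustration of this screening step (not the procedure actually used, which relied on the portals' own search functions), the sketch below applies the listed search terms to a small set of invented article titles.

```python
# Minimal keyword-screening sketch; the article titles are hypothetical placeholders.
SEARCH_TERMS = ["agriculture", "fertilizer", "pesticides", "health risks",
                "diseases", "cancer", "pollution", "waste water", "effluents",
                "industrial waste", "poverty", "urbanization"]

articles = [
    "Industrial waste blamed for pollution in the Hindon",   # hypothetical
    "New flyover inaugurated in Ghaziabad",                  # hypothetical
    "Waste water irrigation and health risks for farmers",   # hypothetical
]

def matching_terms(text: str) -> list[str]:
    """Return the search terms that occur in the article text."""
    lowered = text.lower()
    return [term for term in SEARCH_TERMS if term in lowered]

for title in articles:
    hits = matching_terms(title)
    if hits:
        print(f"{title!r} -> {hits}")
```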
Standard social science ethical procedures were followed, including adhering to the principles of informed consent and confidentiality. Pseudonyms are therefore used in this paper. Participants were clearly informed that they could withdraw at any time without facing negative repercussions for doing so. Ethical approval was received from the University of Sussex (ER/PLW20/1).
Of final methodological consideration are the strengths and limitations of this study. While this research offers unique insights into Karhera's residents' perceptions of environmental pollution and health risks in their changing locality, it cannot provide hard evidence on the links between pollution and resulting health problems, as scientific investigations were not undertaken. That said, the strength of this paper lies in its exploration of local perceptions of pollution and health risks in contexts where these are not highly evident, and in assessing the implications of this for potential future mobilization. Another limitation of this study is that it was not designed for replicability. This is not unusual in relational, place-based research, where the emphasis is on situating people's personal accounts within the broader socio-economic and political contexts associated with a very specific place.
---
Findings
---
Ghaziabad and Karhera: Context and Demographics
Ghaziabad District consists of four Tehsils (divisions), the largest and most densely populated of which is Ghaziabad Tehsil. It includes a diversity of urban and peri-urban settlements, with people variously accommodated in villages, unauthorized colonies, slums, and middle-class colonies. This area has been enormously affected by the transformations in nearby Delhi, for example through the loss of farmland to urban development and through the relocation of polluting industries from Delhi to Ghaziabad [26,27]. The population in Ghaziabad has grown as people are attracted by its urbanizing nature, including rural migrants looking for work and urban populations relocating because of cheaper housing and improved commuting possibilities.
Karhera is a former agricultural village situated within the Ghaziabad Municipal Corporation, an administrative area of Ghaziabad Tehsil. In 1987 this area was converted into a Nagar Parishad, a designation indicating its urban status. Karhera is bounded on one side by a line of industries and production units, and on the other side by the Hindon River.
In the 2014 survey undertaken as a part of this research, almost half of the people surveyed were original inhabitants who had lived in Karhera when it was an agricultural village (44%), while the remainder were migrants (56%), attracted by the industries and the potential for work in nearby cities. Karhera thus has a heterogeneous population, which includes people of different castes, religions, and geographic origins. The 2014 survey shows that three quarters of the original inhabitants (75%) are upper-caste (primarily Rajput), and the remaining quarter is of lower-caste origin (primarily Dalit).
During participatory mapping exercises, Karhera's original residents reported that, 20 years ago, they had relied on agriculture as a primary source of livelihood. By 2014, our survey showed that only 24% of Karhera's households (16% of original households and 8% of migrant households) still cited agriculture as their primary source of income. With increasing urbanization, agriculture has become a secondary occupation and increasingly feminized. Animal husbandry too has decreased, and is now primarily for subsistence purposes. More than a third of both original and migrant households, 41% in the case of original inhabitants and 37% of migrants, depend on desk-based private sector employment such as teaching, insurance, working in call centers, and real-estate dealing. A further 18% of original and 10% of migrant households have businesses as their primary source of income. These include repair and grocery shops, transport businesses, factories, and garages. Only a small percentage of original inhabitants (6%) and no migrants held government posts or were former government employees. Only 15% of the original households, as compared to 30% of migrant households, relied on manual labor as their primary livelihood (drivers, factory workers, and mechanics). The majority of these manual laborers are lower-caste.
---
Transformations and Pollution in Karhera
Urbanization has involved several major transitions: the area has become busier and more built up, the water supply has changed, and the availability of land has declined, all in conjunction with environmental degradation. Karhera, once a quiet rural village, is now a bustling area. New roads connect Ghaziabad to Delhi, traffic is constant and accompanied by noise, people, and pollution. New buildings, shopping malls, and urban activities now characterize the area [3,28].
Water has become scarcer. Large amounts of water are consumed by the industrial sector and by the government of Ghaziabad, which has installed submersible water pumps in peri-urban locations in order to supply water to the new urban establishments, including high-rise apartments and malls. The result has been a lowering of the water table. Areas such as Karhera have been affected by these urbanization processes, and their tubewells no longer provide adequate sources of water [3,26]. Water shortages have also reduced the amount of land suitable for cultivation. For example, much of the Hindon River bank is no longer suitable for cultivation. Agricultural land has also been diverted to urban use. This includes industrial clusters, infrastructure construction, new roads, real-estate development, and urban leisure activities. More specifically, the government has, over the years, acquired Karhera's rural land: 42 acres in the 1960s for the Hindon Air Force base and the creation of the Loni Industrial area; 104 acres for the 1987 Vasudhara Vikas Awas scheme and new urban settlements; and land for the Ghaziabad Master Plan which came into force in 2005 [28]. In 2014, a "City Forest" was created to meet the leisure and greening demands of an urban middle-class population, and a flyover and power station were built. Although some farmers reported receiving limited compensation for these land acquisitions, during interviews and mapping sessions Karhera's residents emphasized that land had been taken from them. They had also, in the early 1990s, sold land to outsiders who built new residences in the "new Karhera colony". At the time of the research, the proposed development of the metro line led Karhera's land-holders to believe that they would soon lose more land.
---
Agriculture in a Rapidly Urbanizing Context
In 2014, almost half of Karhera's households (42%) still relied on agriculture to make some contribution towards their livelihoods. For just under a quarter of households (24%) it was the primary source of livelihood, and it provided a secondary income for nearly a fifth of households (18%). There have nonetheless been significant shifts from predominantly cereal-based farming to intensive, small-scale spinach farming. This contrasts with Karhera's previous status as a primarily agricultural area with a wide range of staple and vegetable crops, best known for its wheat and carrots.
Irrigated agriculture occurs on fields located about 4 km from Karhera's residential area. These fields are close to the Hindon, and have traditionally been irrigated by water from the river and wells. Over time, as the water table dropped, farmers coped by using borewells. These in turn have dried up, and those farmers who can afford to have installed submersible water pumps. When talking about these fields, the villagers refer to crops grown in "clean water". As shown above, however, both the Hindon and the groundwater are highly polluted by industrial contaminants. A collective irrigation system, designed and managed by Karhera's residents and the Panchayat (the local form of village governance prior to Karhera becoming an urban ward), irrigates fields located closer to Karhera using domestic waste water (and, where feasible, tubewell water). Karhera's villagers decided upon this irrigation system about 25 years ago when domestic wells were becoming increasingly saline/polluted, and other traditional water sources (community ponds or jhora) were filled in to make way for new roads. Residents also installed submersibles to ensure their domestic water supply. These factors, in conjunction with piped water and new urban behaviors (daily washing), led to large quantities of domestic wastewater or gandapani (literally, dirty water). As submersibles were expensive and water precious, drains were built to direct gandapani to irrigate these fields.
Whereas, people had previously grown a wide variety of crops, including wheat, rice, carrot, turnip, bottle gourd, sorghum, and fodder, using gandapani facilitated green leafy vegetable growing in response to urban market demands. Spinach soon became "the crop". It thrived well in domestic waste, had a short production cycle, was in high demand, and was not liked by wild animals. As one resident explained during an interview: "It so happened that because of the shrinking of the forests, wild pigs and NielGai (antelope) started to destroy our standing crops".
The wastewater used in the irrigation system was, at first, only from cooking and bathing. However, as submersibles were installed in the village, even more wastewater became available. This eased the villagers' needs for domestic water and meant that women could wash clothing at home, rather than having to go to the river or a stream. This also meant that flush toilets became more common, and as more and more migrants settled in Karhera, so the amounts of wastewater increased. In addition, once Karhera became an urban ward, the village Panchayat disintegrated. As a consequence, the drains were no longer maintained and gandapani came to contain fecal matter. Spinach thrives in this polluted water, growing very quickly and providing a harvest every 20-30 days, all year round. A group of Rajput (upper-caste) elderly women discussed the subsequent prioritization of spinach farming in a participatory mapping exercise:
Carrot and wheat will take a minimum of 4 months to mature. But in the same time spinach grows throughout year and it takes hardly takes one month, now see this is one kiyari (plant bed), and in this kiyari the spinach is matured now. We will cut this and at the same time in the very next kiyari we sow another spinach. So it's easy and it will work and carry on like this. It does not need much physical work, there is no need of plough the field, just give water, put the seed in or spread the seeds into the field. That's all it needs.
During participatory sessions, residents explained that spinach farming is lucrative and has brought financial stability to those families engaged in cultivation. It has also enabled women to achieve financial independence. For example, a widow named Shanti Rani was one of the first women in Karhera to take up, and to survive exclusively on spinach cultivation. As other women, discussing Shanti explained: "Yes, we are making good money. Look she is growing spinach by herself; she cuts the harvest and sells in the market. So the money remains with her".
In Karhera, spinach grown in gandapani is preferred to that grown in "clean water" because the use of wastewater reduces production costs (water is free and less fertilizer is required). In addition, there is no differentiation in sale price, and some men, in participatory mapping sessions, suggested that gandapani spinach is more marketable: "The spinach which is grown in unclean water sells faster in the market because of its shine. It shines because it is getting pure and natural dung so this is the difference". Mother Dairy is the only buyer that specifies that crops must be irrigated with "clean water". Created as a government subsidiary, this private company buys and sells agricultural produce. Vegetables and fruit are dealt with through Safal, which aims to establish a "direct link between growers and consumers" in order to provide fresh, healthy produce. In the Delhi metropolitan region, Safal is synonymous with "quality, trust and value". However, Safal/Mother Dairy does not pay more for this spinach.
Arable land supplied with gandapani is highly desirable because of the access to free water; the proximity to the village, so that less time is spent walking and transporting equipment and produce; and the frequent and high spinach yields. Moreover, "clean water" fields require infrastructure to ensure constant irrigation. The tubewells and borings previously managed by farmers no longer provide water because of the lowering of the groundwater table. The installation of submersibles is expensive and the benefits uncertain. As Suraj explains, "they also need to spend on generator to run the submersible and tractor to till the land. It means only those farmers who can spend around 5 lakhs (or 500,000 rupees) can cultivate".
Take for example, the following two cases, both derived from the in-depth interviews: Jayawati and her husband's tubewells failed when the government submersible, which provides water to middle-class colonies in Ghaziabad, was installed. Previously it was possible to access ground water at 30 feet, but now, Jayawati says, it is below 250 feet. "Earlier we have boring in our field, but it does not work now". They decided to install a submersible. "I cannot allow my children to starve, so I have taken a loan of 2 lakh rupees and have installed the submersible in the field".
Sushil's land is located near the foothills where there is no connectivity to the gandapani drainage system. The tubewell on his land failed due to the government submersible that has been installed alongside it. This has meant that he has to buy water at the rate of 350 rupees per hour. For him, agriculture has become very costly. After all the expenses on water and fertilizer; he makes only a small profit or, in his words, he "is hardly left with some money".
As a consequence, some land owners allow their lands to lie barren as they cannot afford to farm. Others have, for the same reason, decided to sell. As one farmer explained, "Here the price of land is very high i.e., 50 thousand per gaj [square yard]. So, the farmers prefer to sell their lands, than starving". Gandapani spinach farming is, however, thriving.
---
Environmental Pollution
Despite these pressures on land and water, almost half of Karhera's population (42%) is involved in farming in one way or another. This means that these people are intimately connected with the environment on a daily basis. They are the ones most affected by, and best placed to recognize, environmental pollution and degradation. They are the ones most likely to experience any health consequences because of their close contact with the water, air, and soil. In Karhera, pollution concerns primarily the water (described above) and the air.
As is the case in many of India's peri-urban villages, Karhera's residents are aware of industrial contamination. Industries are known to be pumping untreated water and effluents into the Hindon. This contamination has percolated through the soil, making the groundwater unsuitable for human consumption. Industries and factories have also reportedly pumped toxic water directly into the water table. These forms of pollution are well recognized by Karhera's residents, who all stressed the poor quality of the water. They complain that this has polluted the river, which had previously been a significant source of water. One such example, shared in an interview, comes from Umesh, who has lived in Karhera all his life:
For the villagers Hindon happens to be a very good river. We used to drink water from Hindon. We used to take bath also. Sometimes while working in the fields we used to even drink water. The water of Hindon used to be so clean that you can easily see any coin falling there. However, with the coming of the industries, the river water started to become dirty. The drains of the cities were connected to Hindon; the drains of the factories also were connected. Till what level would the river bear this pollution?
A second example comes from Harish Singh, a retired veterinarian and now farmer from Karhera: "From the time when the factories started to drain out the contaminated water, from then onwards the Hindon River started to get polluted". Nowadays, the water from the Hindon is, as villagers say, "black" from the factory drains, and is no longer used for irrigation or drinking. This polluted water has been linked to a range of diseases. As one upper-caste male vendor explained during the participatory mapping: "Look, some survey revealed that the Hindon River is causing cancer. There are cases of cancer in the village. Many people have been affected. The reason has been the [contamination of the] drinking water". This articulation of environmental health threats is echoed by Jamuna Devi, also from Karhera. She said "all the health problems such as cancer, high blood pressure, and joint pains among the people of young age are caused by the water".
Using sewage water for crop irrigation can also have negative consequences. Srinivasan and Reddy [29] argue that it can lead to increased levels of morbidity, and there is scientific evidence that heavy metal contamination in crops can stem from wastewater vegetable production, with spinach and other leafy vegetables being particularly prone to heavy metal uptake [30][31][32]. Growing spinach also requires significant amounts of time spent handling the crops and being exposed to the polluted water. This exposure can, after a few years, result in a range of ill-defined symptoms such as headaches, skin diseases, fever, stomach ailments, and diarrhea. Microbial infections (including pathogenic viruses, bacteria, and protozoa) may also be transmitted in this water. In the peri-urban areas of Hyderabad City, poor water quality produces "high morbidity and mortality rates, malnutrition, reduced life expectance, etc." [29]. This is particularly prevalent amongst women living in the villages, because of the time spent weeding and their extensive contact with the soil.
There is, however, no uniform articulation of environment/health threats associated with gandapani and spinach farming in Karhera. As revealed by the in-depth interviews and participatory mapping, not everyone is comfortable with this form of irrigation and with the consumption of produce grown in wastewater. As Umesh says, "the spinach grown in gandapani is not healthy. The root absorbs the dirty water and the polluting agents. These agents then enter into the plant. So when we eat that it enters the human body and causes disease". Amber Singh, who cultivates a variety of vegetables, avoids buying vegetables from the market as he cannot be sure about the water used to irrigate them. As he and his family are still able to irrigate their large landholdings with "clean water", they consume only this produce. Similarly, Bina only eats spinach grown in "clean water". She says that the wastewater used for irrigation contains latrine and toilet waste from all Karhera's households: "Everybody has put pipes and there are neither ditches nor tanks. And the (toilet) waste goes directly to the drain. And so their crop grows faster. . . . the impact on health is apparent. We never eat that spinach". Bina's mother-in-law added that, recently, her son "had got dhaniya [coriander] and it looked dirty . . . When I picked up, I saw it had feces on it". This was taken as a clear indication of health-harming contamination. These residents, who do articulate concerns with gandapani, have developed strategies to protect themselves from potential contamination. For example, because Kasturi Devi develops allergies when she is exposed to the wastewater, she wears shoes to protect her feet when working in the fields and, immediately after leaving the fields, washes her hands with Dettol. However, despite perceiving health risks related to gandapani spinach consumption, these residents' concerns did not inspire collective mobilization to challenge the practice (discussed further below).
Some people believed, however, that the spinach or exposure to gandapani was not unhealthy and articulated this in interviews. Jeevan Lal, for example, argued "There is no disease here due to spinach cultivation", and others agree that the water they use for irrigation is "just household water". As such, and as Santosh explained, there is no harm in using gandapani for irrigation and consuming the spinach. Dhanush similarly points out that the "gandapani remains in the roots of the spinach", and when the spinach is grown, the roots are thrown away. As a result, "there is no effect on health. And [this is evident because] for the past 16-17 years, sewage water is used to grow spinach [and no-one has become ill]". These inhabitants consume this spinach and do not articulate any concerns about possible environment/health threats.
Air pollution, like water pollution, is highly visible in Karhera. During our first community meeting with village elders, they pointed to the black smoke emitted from one of the nearby factories. Residents complained that washing hanging on the line became contaminated with black soot and questioned whether this soot was also entering their lungs and causing harm. According to the villagers, air pollution was the reason behind the increasing cases of non-communicable diseases. They drew a direct correlation between the health of Karhera's residents and the proximity of the factories, arguing that the factories caused tremendous harm. Tezpal Singh, for example, suggested that the toxins in the air could be causing cancer in the village. During a community mapping exercise, the men said: Diabetes, cancer, high [blood] pressure can be seen more [frequently]. There is a factory at the vicinity of this village [referring to a dye factory which colors jeans and/or a rubber factory which burns rubber]. The smoke from this factory spreads into the village.
The polluted air from the industries is also seen to affect crops. During the same community mapping exercise, the men directly associated polluted air with a plant disease called Chandi (lit. silver) which destroyed crops. The glittering coating and fungus was most prevalent each year during Diwali, and residents linked this to the additional contaminants in the air, caused by factory emissions combining with Diwali pollution.
Others commented that the factories exerted a tremendous negative impact on the health and agriculture of the village. Umesh said "we can see black marks on the leaves of the plants. The leaves of the plants get covered with the tiny black dust that comes in the air from the factory". Rajni argued that, sometimes, the spinach leaves in her fields shrink and dry due to this polluted air; at other times the leaves are infected by a fungus and her crop spoils.
---
Discussion: Peri-Urban Living
Environmental justice movements have worked with communities to challenge environmental pollution while simultaneously addressing health inequities [12][13][14]. In peri-urban areas, environmental justice issues are often particularly stark. While all peri-urban residents may experience air and water pollution, the poor are disproportionately affected in that they also lack decent sanitation and access to medical services, while working in unregulated conditions and with contaminated soil and crops [24,33], and may be particularly dependent on natural resources for their livelihoods [21,22]. Furthermore, they have far less ability to control their exposures and less choice. They cannot escape the unsavory water by purchasing expensive drinking water. Their constant and extensive exposure to a wide range of pollutants threatens their livelihoods and health. As Douglas argues, "there is a critical peri-urban human ecology where healthy crop plants and healthy human life go hand in hand" [24].
In other peri-urban areas of India, there has been considerable concern about vegetable production and the exposure to toxins and pollutants. Water and air pollution have been the subject of numerous newspaper articles and have, on occasion, resulted in civil society protests against polluting industries. In Karhera, there have only been a few isolated attempts to address environmental pollution despite official recognition of the contaminated environment [26]. Some of Karhera's residents had complained about a particular factory to the police station. But, nothing was done. Efforts such as these underline the lack of collective action in Karhera. There are no community NGOs addressing pollution, there are no local attempts to treat wastewater before irrigating, there are no Karhera activists and no local political leaders articulating environmental concerns (discussed further below). The reasons for this stem, in part, from the diverse views on whether spinach grown in gandapani is damaging to health and in part from the heterogeneous nature of the community, the mutual interdependencies between these residents and the diverse ways in which different members of the community benefit (or lose out) from urbanization.
Conventional literature focuses on how low-income, marginalized racial or ethnic communities tend to experience much higher levels of exposure to toxins [34][35][36][37]. These differential levels of risk mean that low-income communities are often far more aware of the pollution and toxins than middle-class residents who are better able to control their environments through mitigating measures [38]. Few studies examine contexts where the people living in low-income settlements do not recognize the health challenges associated with toxins or pollutants (but see [37]) or contexts where poor, marginalized communities and middle-class residents live cheek-by-jowl and where recognition of hazards does not follow socio-economic divisions. However, in Karhera exposure to pollution and recognition of risk cannot be disaggregated by class or identity. As revealed by surveys, interviews, and participatory mapping, here almost all farmers engaging in agriculture are using wastewater. This includes men, women, upper-caste, lower-caste, migrants, and original inhabitants. Half of the original upper-caste, land-owning farmers interviewed were happy to eat spinach grown in gandapani, while the other half were not. But all of the non-landed, whether upper-caste or Dalit, migrant or original inhabitants, ate the spinach they produced. In this group, most people did not acknowledge any potential health threats. However, the poorest of the upper-caste original inhabitants who no longer own land, ate this spinach while articulating concerns about gandapani. We found no clear distinctions between men's views and those of women. This lack of clear divisions reflects the heterogeneous nature of agriculturalists in Karhera. Here farming is undertaken by nearly half Karhera's residents: original inhabitants and migrants (both long-term and recently arrived), people who have large and small land holdings, people who farm animals (buffalo and pigs), and people who farm crops, upper and lower-caste residents and migrants, hired workers, farmers who rent land, share-cropping farmers and land-owners tending their own lands. Maintaining a livelihood through agriculture in Karhera requires constant interactions and mutual dependency across social divides of class and identity. Land-owners may depend on rents earned from leasing arable land or cultivation of their own plots. Some poorer residents depend on opportunities to work in others' fields, while others are engaged in purchasing and selling of produce in the markets. Ultimately, the large numbers of people directly or indirectly dependent on agriculture across the social spectrum is an influencing factor in continued marginal concern over health risks associated with gandapani cultivation and the consumption of produce grown under this method.
Mutual dependencies and the benefits of spinach production with gandapani water explain why mixed interpretations of the health/environment threats exist. Several studies have explored peri-urban communities' needs and demands around water in India [3,26,39]. Mehta and colleagues argue that, in India, few peri-urban residents believe that there is any value in making demands on the state; rather, "both the rich and poor opt out completely of the formal system and need to fend for themselves" [33]. In their research into water pollution and mobilization in India and Bolivia, Mehta et al. found that, "when pushed", some Indian peri-urban residents said they would partake in collective action if organized by others; yet many others, such as migrant laborers, did not have formal residential status, felt more vulnerable, and were unable to operate as "rights-bearing citizens" who could make sustainability and environmental justice demands on the state [33]. Instead of collective action and protest, India's peri-urban residents have devised their own informal strategies to ensure access to water [26,33,40]. However, in Karhera there are other well-known and recognized forms of pollution. Why have these too, for the most part, been accepted, and why has there been no collective action to address these more obvious forms of pollution? The explanations lie partly in the nature of Karhera and its peri-urban location, partly in the actions of government authorities, and partly in the way pollution is discussed in the Indian media. An appreciation of the context in which people live adds a crucial dimension to local perspectives on, and apparent acceptance of, pollution. As Corburn and Karanja [7] argue, drawing on an African example, understanding the complexity of informal settlements and the diverse determinants of health requires a relational place-based approach which focuses on the ways in which context defines and shapes peri-urban residents' perspectives and, in turn, informs policy.
---
Advantages and Disadvantages of Peri-Urban Living
Living in peri-urban Karhera has both advantages and disadvantages for all Karhera's residents. This ambivalence is, as shown below, most clearly evident in gandapani spinach production.
Karhera's upper-caste land-owning residents have retained the rhythms of rural life, their socio-cultural moorings, and ties to land. They are able to generate a livelihood from agriculture and have access to new urban markets for this produce. In some instances, spinach farming has been so lucrative that large land-owning original inhabitants have given up regular employment to focus on their farming. One such example is Bablu, who hires laborers to work in his fields and whose income today is more than he earned in his private job. Other advantages include, rather ironically, the fact that Karhera itself is rapidly urbanizing. Many upper-caste women prefer the pucca houses, concrete roads, electricity, and piped water. The submersibles ease women's domestic labor and gandapani means they do not need to invest labor in the irrigation of their spinach fields. The village, with fewer buffalo and cows, is perceived to be cleaner. The value of land in and around Karhera has massively increased and has facilitated new, lucrative forms of income, including renting accommodation to migrants, selling land at increased rates to property speculators, and white collar employment for literate upper-caste members. These additional urban-informed incomes have, as Malin and DeMaster [22] point out in their analysis of environmental injustice caused by hydraulic fracturing on Pennsylvanian farms in the USA, supplemented often-times marginal and insecure agricultural practices, but have long-term consequences in terms of environmental inequality. They term this a "devil's bargain" and in Karhera this takes the form of farmers' dependence on both agricultural production and on urban-and industrially-influenced economic activities, which in combination leads to incremental environmental degradation while simultaneously shoring-up agricultural production.
Upper-caste residents have experienced modern lifestyles, education, and a shift away from manual labor, yet many are disillusioned. They are acutely aware of their loss of land. They also repeatedly stress their failure to influence government officials who do not come to the village and do not listen to them. This leads to a sense of disempowerment. They have also lost their sense of control over the lower-caste residents and their collective sense of being "owners of Karhera village". Although materially they survive relatively well on a combination of agriculture, rental agreements, and white collar jobs, these activities, unlike farming, do not provide a sense of being their own masters. Karhera's upper-caste residents also experience a lack of political clout. Previously, as large land-holders in an agricultural village, they would have been the political elite. They would have constituted the panchayat and engaged with members of local government, particularly in the form of the Department of Agriculture. However, as Karhera is now an urban ward, they no longer have access to these forms of rural governance. In addition, as the Department of Agriculture is concerned with cereals and grains, rather than vegetable farming, few political connections remain.
Conditions have also improved for lower-caste inhabitants of Karhera. Many of the original Dalit inhabitants no longer work in agriculture. Instead, they are in employment in general stores, as petrol pump attendants, car mechanics, drivers, painters, etc. Some of these residents commute to Delhi or Ghaziabad daily, working as laborers or daily wage earners. A very small proportion of original Dalit inhabitants are employed as civil servants, teachers, or in the police force. The increasing urbanization of Karhera has also meant that Dalit women, now able to travel beyond the village boundaries, are finding work as domestic staff or as security guards in malls. Because original Dalit families never owned agricultural land and never kept buffalo, they had fewer opportunities to convert agricultural buildings into leased accommodation. Nonetheless, some families have been able to rent out one or two rooms in their homes. In some ways, Dalit villagers' social standing has improved: in the past, the Dalit settlement or basti, located on the edge of the village, was also the dumping ground for animal carcasses and Dalits performed caste labor (cleaning the village, working on upper-caste fields, collecting firewood). The inflow of migrants and the rent economy have reduced these spatial and social caste distinctions. Explicit discrimination along the lines of caste is no longer a common feature in Karhera. Correspondingly, offensive language and caste-related slurs have declined. As is evident above, the opportunities created by gandapani and spinach have been particularly beneficial for Karhera's Dalit farmers. Recall the example of Lukshmi, a single woman who transitioned from wage labor to being a farmer in her own right and hires land on which to produce spinach. Some Dalit men, like Shiv Kumar (described above), have used these opportunities to set up their own businesses buying spinach from farmers and selling it in the market and, in this way, avoiding factory labor. A few Dalit families still practice pig husbandry (with pigs reared in their homes) to supplement their income.
Notwithstanding these improvements, Karhera's original Dalit villagers also experience a sense of disempowerment. Not only are they being replaced by Dalit migrant laborers, but they remain at the bottom of the social ladder and its power hierarchy. The poorest of the original Dalits, people such as Manjari (see above), continue to work as laborers on other people's fields. Some caste discrimination still remains: as one Dalit woman complained: "It was just the same, as it was in the past". For the Dalits, as for the upper-caste, life in Karhera is ambivalent, with both advantages and disadvantages, and they too have accepted the trade-offs. Some Dalits have, however, no option but to engage in gandapani spinach production. They neither own large areas of land nor have the financial resources to invest in the technology required for "clean water" irrigation. For them, the only farming option is spinach cultivation using domestic wastewater. Their experience of disease and health has not deteriorated either, as they have always done manual labor and always been exposed to domestic waste. Other studies of lower-caste communities have also found that people trade off health for social and economic improvements [20,21,41,42].
Migrants too find the experience of living in Karhera to be one of gains and losses. They have come to Karhera because it offers cheap housing, provides access to education for children, and to jobs in the nearby factories. These conditions far exceed their rural opportunities. For example, Durga and her husband came because they could "hardly manage" their livelihoods in Bihar. Reflecting on their move, she says:
Where would we get money, how would we look after our children? Our land in Bihar is barren, how could we do cultivation? Simple sowing does not give you a yield and good harvest. You need water and fertilizer to get a good yield. At least here my husband can work in the factory. I work as an agricultural laborer on the land of the villagers.
Similarly, Srichand left Meerut 20 years ago in search of work. Initially he got work in a dye factory in Karhera. He was subsequently diagnosed with tuberculosis (after about 5 years of work). When he recovered, he chose not to return to the factory and started selling spinach instead. As he was able to generate more than his wages, he has continued selling spinach. For many of the migrants, the work that they get in the spinach fields is better than factory work. As Durga's comment suggests, Karhera offers a degree of economic and food security. For these reasons, few migrants complain about the use of gandapani in spinach production. Yet, despite the economic advantages of living in Karhera, the migrants complain of the arrogance and hostility of the upper-caste original inhabitants, and are acutely aware of their lack of wealth. Social tensions are palpable within Karhera village. Even though some migrants have lived in Karhera for more than 40 years and have bought their own homes, they are still referred to as outsiders. In addition, as industrial employers prefer migrant laborers who have less opportunity to unionize or demand better conditions, they are blamed for taking factory jobs away from original inhabitants. These jobs are seen to benefit only the migrants and not the villagers.
The Janus-faced advantages and disadvantages of living in Karhera are symbolized in gandapani spinach production-which is productive and financially lucrative, but potentially damaging to health. Some upper-caste residents accept this trade-off and therefore find no reason to complain; others have not accepted the trade-off and are therefore ready to "see" the negatives as well. It is those upper-caste residents who have benefitted the least-the poorest of the village elite-who were most articulate about the potential environment/health threats of spinach. They, like other upper-caste villagers, were aware of the media (see below), and of the villagers' political marginalization, but, unlike other Rajputs, they did not have "clean water" fields, or spinach from these fields, to consume. They thus felt their deprivation the most.
---
Precarity and Lack of Community in Peri-Urban Karhera
While such ambiguities exist for spinach farming, it is clear that industrial air and water pollution-and the potential for ill-health-are more widely accepted. However, there is still no collective action around this. In part, this is because of the interdependencies in Karhera: the upper-caste farmers and other upper-caste residents of the village rely on the migrants to lease property; the Dalits also rely on the migrants for rental gains, and many of them generate an additional income through the industrial economy in the form of new and additional markets and trades. Migrants also need the original residents for both employment and accommodation. Yet, despite these interdependencies, there is no sense of community. As one original Dalit woman explained, "there is no unity at all". Instead of a strong sense of community, our research gives a sense of people marking time in Karhera; life, as all these residents experience it today, is precarious. There are insecurities associated with the decreasing availability of water and the decline in arable land as more and more housing is erected, and, as middle-class expansion continues, many of Karhera's residents fear that there will ultimately be no place for them. Already they are absent from both government political processes and corresponding media reporting.
The media review makes it clear that, over the past 10 years, there have been frequent articles about pollution in peri-urban India, which predominantly address new middle-class concerns. Very few articles have examined the intersection between pollution, environmental degradation, and health, and those that have seldom focus specifically and exclusively on Ghaziabad, reporting instead on either several areas or the broader Trans-Hindon or Delhi-National Capital Region areas (the Hindon River divides Ghaziabad city and its peri-urban peripheries, with the area to the west known as the Trans-Hindon and the east as the Cis-Hindon). Only six articles reporting on the Trans-Hindon have linked cancer to the industrial discharge into the main rivers, including the Hindon. One article suggests that a Trans-Hindon village experiences a high incidence of cancer and bone deformities resulting from the elevated levels of toxic metals in the rivers [43]. Another article points to the untreated industrial effluents and untreated or partially treated sewage in the Hindon River, and connects these to the unsafe levels of chromium, arsenic, and fluoride in groundwater [44]. Yet another reports on air pollution in the Delhi-NCR region, highlighting Ghaziabad as having rising SO2 and CO2 levels, and links these to different kinds of cancers, tuberculosis, high blood pressure, kidney failure, and heart failure [45].
Residents of Karhera-particularly upper-caste landholders-have read these media articles, as indeed they told us during interviews. As their above quotes show, they too link pollution to cancer. This led us to investigate actual cases of cancer in the village. The household survey and all in-depth interviews enquired about cancer diagnoses in the households. Only one Dalit household reported a current case of cancer, which was not explicitly linked to any form of pollution. This failure to demonstrate a significant burden of disease goes partway to explaining why residents believe that pollution is causing them harm, but have not done anything about it. It is, in keeping with recent literature on other cases of environmental pollution, a context in which the damage to the environment has been undramatic and gradual [20], and in which evaluating the extent of the risk, pinpointing responsibility and allocating blame remains unclear, difficult, and ambiguous [18,19]. This ambiguity in identifying who to target is also a consequence of the way political engagement in Karhera has waned over the years. Former rural local governance, such as the Panchayat, no longer exists and agricultural government officials are no longer interested in Karhera. Current government institutions which do cover Karhera are complicated and remote. Randhawa and Marshall's examination of government policy and water management plans in peri-urban Ghaziabad shows the complexity of local government (three ministries, two center-level subsidiary bodies and four departments are involved) and argues that this creates a context in which national level policy makers are able to "exclude themselves from the larger context of the problem and to represent the government's view", emphasizing technical solutions and scientific expertise (pumping stations, sewerage treatment plants) to solve problems in the future [26]. It also marginalizes junior staff who interacted much more closely with peri-urban residents and were thus more aware of the problems on the ground, yet were unable to address the problems or persuade their seniors to act or-where there were official plans already in place-to release funding to enable local officials to address particular issues. Furthermore, the expert and technical nature of the policy process, and of the policies, operated to exclude other local forms of knowledge, despite the fact that India has encouraged participatory processes in government policy [26,46]. Local opportunities for people's participation were thus unsympathetic to the grievances of the disempowered peri-urban residents, structured to facilitate the participation of middle-class urbanites, and, in any event, often unknown to poorer residents. Where Karhera's residents do have an opportunity to participate in governance processes, such as through an elected municipal councilor of the Urban Local Body (ULB), these posts are not particularly powerful, are not integrated into all the different structures, and are highly dependent on the person elected. Randhawa and Marshall show that some councilors play an active role, while others only intervene when people's access to resources is directly threatened. Ironically, in the case of Karhera, the current councilor is inactive and a former councilor has been involved in the development of urban facilities from which Karhera's residents are excluded.
---
Conclusions
Veena Das has pointed to the "difficulty of theorizing the kind of suffering that is ordinary, not dramatic enough to compel attention" [47]. Illness can be normalized and interpreted as the kinds of things that happen to bodies. Pollution can similarly be normalized; it does not always lead to collective action, and such action may not be facilitated through participatory processes [18,20]. Rather, collective action requires a recasting of illness or disease, resources, and a degree of social trust both in the community and in the state. As Kasperson and Kasperson argue, participatory engagement is a learned skill built up through years of interaction with political processes and government officials. "Left out" from the vision of participatory theory and collective action "are those who do not yet know that their interests are at stake, whose interests are diffuse or associated broadly with citizenship, who lack the skills and resources to compete, or who have simply lost confidence in the political process" [17]. The potential for using citizen science in contexts such as these, where residents do not represent a cohesive community and where the peri-urban economy has tied local residents into relationships of mutual dependency which inhibit political mobilization, despite some residents' concerns about the health risks, remains to be explored in future research initiatives.
Pollution and contamination are complicated. In Karhera, pollution is obvious at one level (the "black river") and hidden at another (the contaminants in "clean water" or in gandapani), and there are few direct sensory responses or collective biomedical consequences. As such, considerable uncertainty often exists about whether pollution is harmful or not, or how much exposure is safe. Yet the identification of hazards is not just about the science of pollution. As Kasperson and Kasperson argue, oftentimes societies do not recognize and acknowledge pollution, waste, and other industrial hazards, in part because of the nature of the hazard (uncertainty about the science, a lack of sensory experience or of a high disease burden), and in part because of the nature of the society (because the pollution serves other purposes in society). This is even more pertinent in peri-urban contexts, where a relational approach and a focus on place deepen this analysis. As Corburn and Karanja [7] have argued, the nature of the place itself shapes the possibilities for meaning-making for health. In India, the peri-urban space is targeted for urban development and as a means of attracting international firms and industry [48,49], and governance excludes those who are not a part of this urban vision. In contexts such as these, dependency on natural resources in combination with other urban- and industrially-derived incomes can "elide other critical social and environmental concerns" [22], leading to a "devil's bargain" in which environmental degradation and health threats are accepted as part of a livelihood strategy. Karhera is just one small peri-urban village, caught up in, and physically manifesting, broader processes of urbanization and globalization. But it is a village in which the combination of a highly heterogeneous population, socio-economic tensions and interdependencies between residents, a lack of representation in the media, and political marginalization circumscribes residents' ability to engage in collective action. Collective action for environmental justice in peri-urban places may, as a result of this combination of factors, be far from the development agenda.
---
Author Contributions: Linda Waldman, Ramila Bisht, Ritu Priya and Fiona Marshall conceived and designed the field research; Rajashree Saharia, Abhinav Kapoor, Bushra Rizvi, Meghna Arora, Ima Chopra and Kumud T. Sawansi undertook the field research. The data were entered by Yasir Hamid and analyzed by Linda Waldman, Ramila Bisht, Meghna Arora, Ima Chopra and Rajashree Saharia. Linda Waldman and Ramila Bisht wrote the paper, with contributions from everyone, and theoretical and analytical involvement from Ritu Priya and Fiona Marshall.
---
Conflicts of Interest:
The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results. | 63,941 | 1,456 |
7b274008caadd913eaad39c4cc38f19f081c3213 | Herding, social influences and behavioural bias in scientific research | 2,015 | [
"JournalArticle"
] | The mission of scientific research is to understand and to discover the cause or mechanism behind an observed phenomenon. The main tool employed by scientists is the scientific method: formulate a hypothesis that could explain an observation, develop testable predictions, gather data or design experiments to test these predictions and, based on the result, accept, reject or refine the hypothesis. In practice, however, the path to understanding is often not straightforward: uncertainty, insufficient information, unreliable data or flawed analysis can make it challenging to untangle good theories, hypotheses and evidence from bad, though these problems can be overcome with careful experimental design, objective data analysis and/or robust statistics. Yet, no matter how good the experiment or how clean the data, we still need to account for the human factor: researchers are subject to unconscious bias and might genuinely believe that their analysis is wholly objective when, in fact, it is not. Bias can distort the evolution of knowledge if scientists are reluctant to accept an alternative explanation for their observations, or even fudge data or their analysis to support their preconceived beliefs. This article highlights some of the biases that have the potential to mislead academic research. Among them, heuristics and biases generally, and social influences in particular, can have profoundly negative consequences for the wider world, especially if misleading research findings are used to guide public policy or affect decision-making in medicine and beyond. The challenge is to become aware of biases and separate the bad influences from
the good. Sometimes social influences play a positive role-for example, by enabling social learning. Condorcet's "jury principle" is another example of the power of collective wisdom: the collective opinion of a jury-in which each individual juror has just a slightly better than average chance of matching the correct verdict-is more likely than any individual juror to reach the correct verdict, but only if the individuals' judgements are uncorrelated. In other situations, social influence and collective opinions are unhelpful-for example, if people follow a group consensus even though they have private information which conflicts with the consensus. If researchers are aware of these pitfalls and the biases to which they might be prone, this greater awareness will help them interpret results as objectively as possible and base all their judgements on robust evidence.
---
Many mistakes that people make are genuine and reflect systematic cognitive biases. Such biases are not necessarily irrational, in the sense of being stupid, because they emerge from the sensible application of heuristics or quick rules of thumb-a practical approach to solve problems that is not perfect or optimal, but is sufficient for the task at hand. Herbert Simon, an American Nobel laureate in economics, political scientist, sociologist and computer scientist, analysed rationality and his insights are helpful in understanding how heuristics are linked to socio-psychological influences affecting experts' beliefs. Simon distinguished substantive rationality-when decisions have a substantive, objective basis, usually based around some mathematical rule-from procedural rationality, when decision-making is more sensible, intuitive, based on prior judgements, and "appropriate deliberation" [ ]. Using heuristics is consistent with Simon's definition of procedural rationality: heuristics are reasoning devices that enable people to economise the costs involved in collecting information and deciding about their best options. It would be foolish to spend a week travelling around town visiting supermarkets before deciding where to buy a cheap loaf of bread. When planning a holiday, looking at customer reviews may save time and effort, even if these reviews give only a partial account. Similarly, heuristics are used in research: before we decide to read a paper, we might prejudge its quality and make a decision whether or not to read it depending on the authors' publication records, institutional affiliations or which journal it is published in. A more reliable way to judge the quality of a paper is to read all other papers in the same field, but this would involve a large expenditure of time and effort probably not justifiable in terms of the benefit gained.

Availability heuristics are used when people form judgements based on readily accessible information-this is often the most recent or salient information-even though this information may be less relevant than other information which is harder to remember. A well-known example of the availability heuristic is people's subjective judgements of the risks of different types of accidents: experimental evidence shows that people are more likely to overestimate the probability of plane and train crashes either when they have had recent personal experience or-more likely-when they have read or seen vivid accounts in the media. Objectively, car and pedestrian accidents are more likely, but they are also less likely to feature in the news and so are harder to recall.
Problems emerge in applying the availability heuristic when important and useful information is ignored. The availability heuristic also connects with familiarity bias and status quo bias: people favour explanations with which they are familiar and may therefore be resistant to novel findings and approaches. Research into the causes of stomach ulcers and gastric cancer is an illustrative example. The conventional view was that stress and poor diet cause stomach ulcers, and when Barry Marshall and colleagues showed that Helicobacter pylori was the culprit-for which he received the Nobel prize with Robin Warren-the findings were originally dismissed and even ridiculed, arguably because they did not fit well with the collective opinion.
The representativeness heuristic is based on analogical reasoning: judging events and processes by their similarity to other events and processes. One example relevant to academic research is Tversky and Kahneman's "law of small numbers," by which small samples are credited with as much evidential power as large samples. Deena Skolnick Weisberg and colleagues identified a similar problem in the application of neuroscience explanations: their experiments showed that naïve adults are more likely to believe bad explanations when "supported" by irrelevant neuroscience and less likely to believe good explanations when not accompanied by irrelevant neuroscience [3].
Finally, Kahneman and Tversky identified a category of biases associated with anchoring and adjustment heuristics. People often anchor their judgements on a reference point-this may be current opinion or the strong opinions of a research leader or other opinion former. Adjustment heuristics connect with confirmation bias: people tend to interpret evidence in ways that confirm their preconceived notions of how the world works. In this case, beliefs will be path dependent: they emerge according to what has happened before. These path dependencies are explored in more detail below.
---
Social influences come in two broad forms: informational influence-others' opinions that provide useful information-and normative influence-agreeing with others based on socio-psychological and/or emotional factors. Informational influences are the focus of economic models of "rational herding" and social learning, based on Bayesian reasoning processes, in which decision-makers use information about the decisions and actions of others to judge the likelihood of an event. Such judgements are regularly updated according to Bayes's rule and therefore are driven by relatively objective and systematic information. Social learning models are often illustrated with the example of choosing a restaurant. When we see a crowded restaurant, we infer that its food and wine are good because it is attracting so many customers. But a person observing a crowded restaurant may also have some contradictory private information about the restaurant next door: for example, a friend might have told them that the second restaurant has much better food, wine and service; yet that second restaurant is empty. If that person decides in the end to go with the implicit group judgement that the first restaurant is better, then their hidden private information (their friend's opinion) gets lost. Anyone observing them would see nothing to suggest that the empty restaurant has any merit-even if they have contradictory private information of their own. They too might decide, on balance, to go with the herd and queue for the first restaurant. As more and more people queue for the first restaurant, all useful private information about the superior quality of the second restaurant is lost.
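The restaurant example can be made concrete with a stylized simulation of an information cascade, in the spirit of standard sequential social-learning models. The sketch below is illustrative only and is not taken from this article; the signal accuracy, the simple counting rule and the two-restaurant set-up are assumptions chosen for demonstration.

```python
import random

def run_cascade(n_agents=30, signal_accuracy=0.7, a_is_better=True, seed=0):
    """Each agent privately learns which restaurant is better with probability
    signal_accuracy, observes all earlier choices, and then picks a restaurant."""
    rng = random.Random(seed)
    public_diff = 0   # net count of *revealed* signals favouring restaurant A
    choices = []
    for _ in range(n_agents):
        signal_says_a = (rng.random() < signal_accuracy) == a_is_better
        if public_diff >= 2:
            choice_a = True             # up-cascade: follow the crowd, ignore the signal
        elif public_diff <= -2:
            choice_a = False            # down-cascade: follow the crowd, ignore the signal
        else:
            choice_a = signal_says_a    # act on the private signal
            public_diff += 1 if signal_says_a else -1   # this choice reveals the signal
        choices.append("A" if choice_a else "B")
    return choices

print("".join(run_cascade()))
```

Once the public count of revealed signals reaches two in either direction, every later agent in the simulation ignores their own information and simply copies the queue, so the sequence quickly locks into a run of identical choices, which can favour the wrong restaurant if the first few signals happen to be misleading.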
This problem can be profound in scientific research. Adopting consensus views may mean that potentially useful private information, especially novel and unexpected findings, is ignored and discarded and so is lost to subsequent researchers. Evidence that conflicts with established opinions can be sidelined or deemed unpublishable by reviewers who have a competing hypothesis or contrary world view. In the worst cases, the researchers who uncover evidence that fits well with unfashionable or unconventional views may be ostracised or even punished, with well-known historical examples, not least Galileo Galilei, who was convicted of heresy for his support of the Copernican heliocentric model of the solar system.
---
Herding, fads and customs can also be explained in terms of reputation building. When people care about their status, conformity helps them to maintain status, while departing from social norms carries the risk of impaired status. Reputation also survives a loss better if others are losing at the same time. If financial traders lose large sums when others are losing at the same time, they will have a good chance of keeping their job; but if they lose a large sum implementing an unconventional trading strategy, there is a good chance they will lose their job, even if, overall, their strategy is sound and more likely to succeed than fail. People often make obvious mistakes when they observe others around them making similar mistakes. In social psychology experiments, when asked to judge the similarity in length of a set of lines, subjects were manipulated into making apparently obvious mistakes when they observed experimental confederates deliberately giving the wrong answers in the same task-they may agree with others because it is easier and less confusing to conform [4].
The propensity to herd is strong and reflects social responses that were hardwired during evolution and reinforced via childhood conditioning. Insights from neurobiology and evolutionary biology help to explain our herding tendencies-survival chances are increased for many animals when the group provides safety and/or gives signals about the availability of food or mates. Some neuroscientific evidence indicates that, partly, herding activates neural areas that are older and more primitive in evolutionary terms [5]. Herding also reflects childhood conditioning. Children copy adult behaviours, and children who have seen adults around them behaving violently may be driven by instinctive imitation to behave violently too [6].
---
Social influences, including social pressure, groupthink and herding effects, are powerful in scientific research communities, where the path of scientific investigation may be shaped by past events and others' opinions. In these situations, expert elicitation-collecting information from other experts-may be prone to socially driven heuristics and biases, including group bias, tribalism, herding and bandwagon effects. Baddeley, Curtis and Wood explored herding in "expert elicitation" in geophysics [7]. In geologically complex rock formations, uncertainty and poor/scarce data limit experts' ability to accurately identify the probability of oil resources. Bringing together experts' opinions has the potential to increase accuracy, assuming Condorcet's jury principle about the wisdom of crowds holds, and this rests, as noted above, on the notion that individuals' prior opinions are uncorrelated (a simple illustration of this point is sketched after this paragraph). Instead, expert elicitation is often distorted by herding, and conventional opinions and conformist views will therefore be overweighted. Perverse incentives exacerbate the problem. When careers depend on research assessment and the number of publications in established journals, the incentives tip towards following the crowd rather than publicising unconventional theories or apparently anomalous findings [8]. When herding influences dominate, the accumulation of knowledge is distorted. Using computational models, Michael Weisberg showed that, with a greater proportion of contrarians in a population, a wider range of knowledge will be uncovered [9]. We need contrarians as they encourage us to pursue new directions and take different approaches. The willingness to take risks in research generates positive externalities, for example, new knowledge that the herd would not be able to discover if they stick to conformist views. In the worst case, social influences may allow fraudulent or deliberately distorted results to twist research if personal ambition, preoccupation with academic status and/or vested interests dominate. A recent illustration is the case of Diederik Stapel, a social psychologist who manipulated his data from studies of the impact of disordered environments on antisocial behaviour. Marc Hauser, a former professor of psychology at Harvard University, published influential papers in top journals on animal behaviour and cognition, until an investigation found him guilty of scientific misconduct in 2011. Both were influential, leading figures in their fields, and their results were unchallenged for many years, partly because members of their research groups and other researchers felt unable to challenge them. Their reputations as leading figures in their fields meant it took longer for whistle-blowers and critiques questioning the integrity of their data and findings to have an impact.
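The dependence of Condorcet's jury principle on uncorrelated judgements can be illustrated with a small Monte Carlo exercise. The sketch below is not drawn from the studies cited here; the accuracy level and the crude "copy the first expert" rule standing in for herding are assumptions chosen for demonstration.

```python
import random

def majority_accuracy(n_experts=15, p_correct=0.6, p_copy_leader=0.0,
                      trials=20000, seed=1):
    """Estimate the probability that a simple majority of experts is correct."""
    rng = random.Random(seed)
    correct_majorities = 0
    for _ in range(trials):
        leader_correct = rng.random() < p_correct
        votes = [leader_correct]
        for _ in range(n_experts - 1):
            if rng.random() < p_copy_leader:
                votes.append(leader_correct)             # herd on the first expert
            else:
                votes.append(rng.random() < p_correct)   # independent judgement
        if sum(votes) > n_experts / 2:
            correct_majorities += 1
    return correct_majorities / trials

print(majority_accuracy(p_copy_leader=0.0))   # independent experts
print(majority_accuracy(p_copy_leader=0.8))   # heavily herded experts
```

With independent experts the majority verdict is noticeably more accurate than any single expert, whereas when most experts simply copy the first opinion the collective accuracy collapses back towards the individual level, which is the sense in which herding undermines the wisdom of crowds.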
Deliberate fraud is rare. More usually, mistakes result from the excessive influence of scientific conventions, ideological prejudices and/or unconscious bias; well-educated, intelligent scientists are as susceptible to these as anyone else. Subtle, unconscious conformism is likely to be far more dangerous to scientific progress than fraud: it is harder to detect, and if researchers are not even aware of the power of conventional opinions to shape their hypotheses and conclusions, then conformism can have a detrimental impact in terms of human wellbeing and scientific progress. These problems are likely to be profound, especially in new fields of research. A research paper that looks and sounds right and matches a discipline's conventions and preconceptions is more likely to be taken seriously irrespective of its scientific merit. This was illustrated in the case of the Sokal hoax, in which a well-written but deliberately nonsensical research paper passed through refereeing processes in social science journals, arguably because it sat well with reviewers' preconceptions. Another salient example is tobacco research: initial evidence about a strong correlation between cigarette smoking and lung cancer was dismissed on the grounds that correlation does not imply causation, with some researchers-including some later hired as consultants by tobacco companies-making what now seems an absurd claim that the causation went in reverse, with lung cancer causing cigarette smoking [10].
Other group influences reflect hierarchies and experience, if, for instance, junior members of a research laboratory instinctively imitate their mentors, defer to their supervisors' views and opinions and/or refrain from disagreeing. When researchers-particularly young researchers with careers to forge-feel social pressure to conform to a particular scientific view, it can be difficult to contradict that view, leading to path dependency and inertia.
Scientific evidence can and should be interpreted keeping these biases in mind. If researchers support an existing theory or hypothesis because it has been properly verified, it does not mean that the consensus is wrong. More generally, social influences can play a positive role in research: replicating others' findings is an undervalued but important part of science. When a number of researchers have repeated and verified experimental results, findings will be more robust. Problems emerge when the consensus opinion reflects something other than a Bayesian-style judgement about relative likelihood. When researchers are reluctant to abandon a favoured hypothesis, for reasons that reflect socio-psychological influences rather than hard evidence, then the hypothesis persists because it is assigned excessive and undue weight. As more and more researchers support it, the likelihood that it will persist increases, and the path of knowledge will be obstructed. Journal editors and reviewers, and the research community more generally, need to recognise that herding and social influences can influence judgement and lead them to favour research findings that fit with their own preconceptions and/or group opinions as much as objective evidence.
---
Conflict of interest
The author declares that she has no conflict of interest. | 15,810 | 1,659 |
e498a8d329a547aaad597e152c9ff27cb99eb85a | A Multilevel Analysis of the Impacts of Services Provided by the U.K. Employment Retention and Advancement Demonstration | 2,013 | [
"JournalArticle"
] | The United Kingdom Employment Retention and Advancement (UK ERA) demonstration was the largest and most comprehensive social experiment ever conducted in the UK. It examined the extent to which a combination of post-employment advisory support and financial incentives could help lone parents on welfare to find sustained employment with prospects for advancement. ERA was experimentally tested across more than 50 public employment service offices and, within each office, individuals were randomly assigned to either a program (or treatment) group (eligible for ERA) or a control group (not eligible). This paper presents the results of a multi-level non-experimental analysis that examines the variation in office-level impacts and attempts to understand what services provided in the offices tend to be associated with impacts. The analysis suggests that impacts were greater in offices that emphasized in-work advancement, support while working and financial bonuses for sustained employment, and also in those offices that assigned more caseworkers to ERA participants. Offices that encouraged further education had smaller employment impacts. The methodology also allows the identification of which services are associated with employment and welfare receipt of control families receiving benefits under the traditional New Deal for Lone Parent (NDLP) program. | Introduction
The United Kingdom Employment Retention and Advancement (UK ERA) demonstration was the largest and most comprehensive social experiment ever conducted in the United Kingdom. It tested the effectiveness of an innovative method of improving the labor market prospects of low-wage workers and long-term unemployed people. UK ERA took place from October 2003 to October 2007 and offered a distinctive set of 'post-employment' job coaching and financial incentives in addition to the job placement services routinely provided by the UK public employment service (called Jobcentre Plus). This in-work support included up to two years of advice and assistance from a specially-trained Advancement Support Adviser (ASA) to help participants remain and advance in work. Those who consistently worked full time could receive substantial cash rewards, called "retention bonuses." Participants could also receive help with tuition costs and cash rewards for completing training courses while employed.
The UK ERA demonstration differed from an extensive set of previous social experiments for low-income families that focused primarily on "pre-employment" (or "work-first") services (see Greenberg and Robins, 2011 and Friedlander, Greenberg, and Robins, 1997 for a summary). 1 Most of these earlier experiments produced modest impacts and it was felt by policymakers and program evaluators that combining pre- and post-employment services and including financial incentives might strengthen the impacts of such programs.
UK ERA targeted three groups of disadvantaged people: out of work lone parents receiving welfare benefits (called Income Support in the UK), low-paid lone parents working part-time and receiving tax subsidies through the Working Tax Credit (WTC), and long-term unemployed people receiving unemployment insurance (called Jobseeker's Allowance in the UK). The UK ERA demonstration utilized a random assignment research design, assuring unbiased estimates of the program's impacts. 1 One exception is an employment retention and advancement demonstration conducted in the US from 2000 to 2003 (see Hendra et al., 2010). The US ERA was similar in many respects to the UK ERA and served as a prototype for the UK ERA.
The formal evaluation of UK ERA (Hendra et al., 2011) covered five years of program impacts. Administrative records were used to document impacts on several outcomes (mainly employment, earnings and benefit receipt) during the five years subsequent to random assignment. For two of the three target groups (out of work lone parents and WTC recipients), the impacts were generally quite modest and not statistically significant for most of the evaluation period. 2 For the other target group (long-term unemployment insurance recipients), the impacts were statistically significant and sizeable, and persisted into the post-program period.
Within the six districts in which UK ERA 3 took place, there are more than 50 local offices. The purpose of this paper is to try to exploit variation in program practices across these offices in order to determine whether certain features of the local programs' operations are systematically related to program impacts. Previous studies have shown that program impacts vary with operational procedures and types of services provided (Bloom, Hill, and Riccio, 2005, Greenberg and Robins, 2011). Thus, building on these previous studies, we attempt to get inside the "black box" of ERA implementation practices to see which elements of the "total package" tended to be associated with stronger impacts on employment and welfare receipt. 4 2 The US ERA targeted lone parents and like the UK ERA had generally modest impacts that were mostly not statistically significant. Of the 12 programs formally evaluated in the US ERA, only three produced statistically significant impacts (see Hendra et al., 2010).
3 Henceforth, we refer to the UK ERA as simply ERA. 4 For example, some previous studies (such as Hamilton, 2002) have found that programs emphasizing immediate job placement (e.g., job search assistance) generate larger impacts on employment than programs emphasizing human capital development (e.g., placement in education and training). In fact, some studies have found that human capital development programs can lead to short-run reductions in employment. However, a reanalysis of the California GAIN program by Hotz et al. (2006) found that over time the human capital approach can actually generate impacts exceeding those of the work-first approach.
The analysis uses a multi-level statistical model based on the methodology developed by Bryk and Raudenbush (2001) and first applied to the evaluation of social experiments by Bloom et al. (2005). We use both individuals and institutions as the units of analysis, an approach quite appropriate for examining variation in program impacts across offices. Other studies using a somewhat different methodology to exploit variation in office practices to estimate social program impacts include Dehejia (2003) and Galdo (2008).
Implementation practices were not randomized across offices and thus may have been related to client or office characteristics. Because of this, the analysis presented here is nonexperimental. We discuss later the assumptions required for the results to be given a causal interpretation and the reader should keep in mind that causal inferences are only valid if these assumptions are satisfied.
The analysis focuses on out of work lone parents receiving welfare. 5 This group is of particular interest because over much of the five-year follow-up period no statistically significant average impacts were detected on most of the outcomes studied (Hendra et al., 2011). Hence, if we are able to identify program features that are associated with inter-office variation in the impacts for this target group we will have added to the knowledge derived from the evaluation of the ERA program.
The remainder of this paper proceeds as follows. In section 2, we describe the ERA demonstration and what it was intended to accomplish. In section 3, we present the hypotheses to be tested in examining cross-office variation in ERA impacts. In section 4, we present the statistical model used to test these hypotheses. In section 5, we discuss the data used to estimate the statistical model. In section 6, we report our estimation results for welfare and employment outcomes. Results for earnings are provided in section 7. Finally, in section 8, we present our conclusions and policy recommendations.
---
The Policy Setting
The ERA demonstration builds on the New Deal for Lone Parents (NDLP) policy initiative introduced in the UK in 1998. NDLP's aim was to "encourage lone parents to improve their prospects and living standards by taking up and increasing paid work, and to improve their job readiness to increase their employment opportunities" (Department for Work and Pensions, 2002). NDLP participants were assigned a Personal Adviser (PA) through the public employment service office to provide pre-employment job coaching services. PAs could also offer job search assistance and address any barriers participants might have had that challenged their search for work. They also had access to an Adviser Discretion Fund (ADF) that provided money to help participants find employment. Finally, they advised participants on their likely in-work income at differing hours of work and helped them access education or training. NDLP participation was entirely voluntary.
The ERA demonstration project offered services beyond those available under NDLP, mainly in the form of in-work services and financial support. As noted above, these additional services included in-work advice and guidance plus a series of in-work retention bonuses to encourage sustained employment. Support for training was also available; ERA covered tuition costs and offered financial incentives for those in work to train. It also provided an in-work Emergency Discretion Fund (EDF) designed to cover small financial emergencies that otherwise could threaten the individual's continued employment. 6 Importantly, ERA services and financial assistance were available for only thirty-three months.
In order to evaluate the impacts of the multi-dimensional ERA program, a random assignment research design was utilized. NDLP participants who agreed to be included in the experiment were randomly assigned either to a program (or treatment) group that was eligible for the full range of ERA services and financial assistance or to a control group that could only receive standard NDLP services. 7 The randomization process was closely monitored and controlled. The fact that there were no systematic differences between the two groups prior to random assignment (results available from the authors on request) provides some reassurance that the randomization was carried out effectively.
---
Factors Influencing Variation in ERA's Impacts
The simplest measure of the impact of ERA is the difference in mean outcomes between the program and control groups over the follow-up period (five years in this paper). 8 The two outcomes examined in this paper are months receiving welfare and months employed. The impact of ERA on months receiving welfare, for example, is the difference over the follow-up period between the program and control groups in the average number of months receiving welfare. The follow-up period for ERA is five years, roughly three of which are while the program was operating and two are after the program ended. The results presented later distinguish between these in-program and post-program periods.
ERA impacts can vary over time, across persons, and across geographic areas.
Varying impacts over time may have multiple causes including changes in the amount and types of ERA services provided by program administrators, changes in the amount and type of services being provided to the control group under the traditional NDLP program, changes in environmental conditions and changes in the reaction time of participants to the new services being provided. Although we are able to estimate how ERA impacts vary over time, we do not have sufficient data to allow us to identify the precise causes of these varying impacts over time. 7 Goodman and Sianesi (2007) show that 70% of those eligible participated in ERA. Most nonparticipation (86% of cases) was due to (wrongly) not being offered the opportunity to participate. This varied considerably across offices. Participation was higher in areas of higher unemployment. Those already employed at the time of randomization were less likely to participate yet those with substantial prior employment experience were more likely to participate. In the first year after randomization, nonparticipants spent more time in work and less on welfare than participants. It appears, therefore, that offices' tendency to selectively offer the opportunity to participate resulted in the participant sample being made up of individuals with slightly less favorable labor market characteristics than the full eligible population.
Varying impacts across persons (sometimes called "subgroup impacts") can arise because certain types of individuals may be more susceptible to program services. For example, those with longer welfare histories or lower levels of education may have been harder to employ and less likely to have been able to use the ERA services effectively than persons with shorter welfare histories or higher levels of education. On the other hand, those with older children may have been more willing to utilize the ERA services than persons with younger children. As will be indicated below, our empirical model allows us to identify subgroup impacts.
Varying impacts across geographic areas may be due to different environmental factors and to different ways ERA was implemented across the various local welfare offices.
There are a number of environmental factors that could influence the impact of ERA. For example, persons living in areas with higher unemployment or, generally, in more deprived areas may have found it harder to have made effective use of program services. Our empirical model is specified to allow for the impact to vary with a measure of local area deprivation.
Cross-office variation in impacts can arise due to differences in the overall structure of the individual offices and differences in program implementation practices for both ERA and control group participants. For example, offices with higher caseloads may have been less successful in providing meaningful help to ERA participants, thereby rendering ERA less effective. Or, offices that placed more emphasis on immediate job placement may have had larger impacts than offices that emphasized human capital development. Or, offices that were already providing a rich array of services for control group families may have had smaller impacts than offices that were not. Bloom, Hill and Riccio (2005) find that impacts in several US based welfare-to-work demonstrations vary significantly with differences in program implementation practices across local welfare offices.
Office variation in impacts according to the way ERA was implemented is the major focus of this paper, although we also examine how impacts vary over time, with individual characteristics, and with environmental characteristics. Introducing office-level variation in impacts requires a more sophisticated statistical framework than is traditionally used in evaluation research. Specifically, the units of analysis are both the individual and the office and the statistical framework must take this nesting into account. As will be described in greater detail below, multi-level modeling provides a natural framework for analyzing variation in impacts across offices and across individuals within offices. Although the ERA demonstration took place across 58 offices, in practice operations among some of these offices were shared. 9 Where this applies, we have combined the offices, resulting in 37 distinct units of delivery which, for convenience, we continue to refer to as "offices" in the remainder of this paper. 10 Before proceeding with the specification and estimation of a multi-level statistical model, a fundamental question must be answered. Namely, is there enough variation in the impacts of ERA across offices so that implementation differences can possibly be explained by office-level characteristics? To determine this, we used a multilevel Poisson regression model with program group status as the only regressor in order to construct empirical Bayes estimates of the extent to which program effects on months receiving welfare and months employed varied across the 37 offices in our sample. 11 We estimated separate models for the in-program period (1 to 3 years post randomization) and the post-program period (4 to 5 years post randomization). We conducted formal statistical tests to determine whether the individual office-level impacts were significantly different from the average impact estimated over all offices.
Figures 1A and 1B present the empirical Bayes estimates of office-level effects.
Since these are generated by a multilevel Poisson model, they are reported as incidence rate ratios (IRRs). In other words, they are proportionate impacts such that a value of 1 indicates no effect (it implies an increase by a factor of 1). Similarly, an effect of 0.5 implies a reduction of 50 per cent and a factor of 1.5 indicates an increase of 50 per cent. 9 For further details, see Dorsett and Robins (2011). 10 We also performed some analyses using the full 58 office sample, but the results were not as informative as the analyses performed on the combined offices sample. We are grateful to Debra Hevenstone for developing the methodology to combine the 58 offices into the 37 distinct offices. 11 We discuss the multilevel Poisson model in detail in section 4.
The welfare impacts (Figure 1A) range from 0.59 to 1.17 for the in-program period and from 0.55 to 4.63 for the post-program period. Although not visible from the chart, these very large impacts for the post-program period correspond to the smallest offices. The overall impact is shown by a vertical line in the figure. The employment impacts are given in Figure 1B. These range from 0.70 to 2.49 for the in-program period and from 0.72 to 1.69 for the post-program period.
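As a purely numerical illustration of what such IRRs imply, an office-level IRR simply rescales the expected number of months; the control-group mean used below is invented for exposition and is not an estimate from the ERA data.

```python
# Illustrative only: converting office-level IRRs into expected months.
control_mean_months = 30.0             # hypothetical control-group mean over the period
for irr in (0.59, 1.00, 1.17):         # values spanning the in-program range in Figure 1A
    program_mean = control_mean_months * irr
    print(f"IRR {irr:.2f} -> program-group mean of about {program_mean:.1f} months")
```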
For purposes of this paper, the important question is whether the variation across offices in the estimated impacts is statistically significant. We tested this using likelihood ratio tests, comparing our results with restricted results where the impact was not allowed to vary across offices. For both outcomes, the restriction was strongly rejected. 12 Therefore, we conclude that there is sufficient variation in the impacts across offices to warrant a further, more sophisticated, analysis to determine whether part of the variation can be explained by office characteristics.
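The form of this comparison can be sketched as follows. The log-likelihood values and degrees of freedom shown are placeholders rather than the estimates from our models, and the simple chi-square reference distribution ignores the boundary issues that arise when testing variance components.

```python
from scipy.stats import chi2

# Placeholder log-likelihoods: restricted model (one common impact for all offices)
# versus unrestricted model (office-specific random impacts).
ll_restricted = -45210.7
ll_unrestricted = -45188.3

lr_stat = 2.0 * (ll_unrestricted - ll_restricted)
df = 2   # assumed extra parameters: the impact variance and its covariance term
p_value = chi2.sf(lr_stat, df)
print(f"LR statistic = {lr_stat:.1f}, p-value = {p_value:.2g}")
```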
---
Methodological Framework for Explaining Cross-Office Variation
---
Estimation Approach
Our fundamental approach for examining variation in impacts across offices is based on a simple production function framework in which the implementation (or production) of ERA services within a particular office was related to a set of individual, environmental and office factors (or inputs). These factors are based on ERA participant needs and experiences as well as the manner in which ASAs provided the ERA services.
In examining variation in ERA impacts across offices, we focus on ERA services that are consistent with the primary objectives of the demonstration, namely retention and advancement services. Two basic hypotheses will be tested (the specific variables related to each of these hypotheses are described in detail below). First, we hypothesize that the strength (or effectiveness) of ERA's impacts (as opposed to the direction of impacts) will be systematically related to the intensity of ERA services (reflected, perhaps, by the amount of time advisers spend with each ERA participant). Second, we hypothesize that the strength of ERA's impacts will be related to the types of ERA services provided (such as help with advancement or help with finding education and training opportunities). Both of these hypotheses are relevant for policy makers. For example, if it is the intensity of services that matters, then hiring additional caseworkers may represent an effective use of public funds.
Or, if it is found that particular types of services are associated with greater impacts, then program operators who are not currently emphasizing such services might find it worthwhile to redirect their program delivery activities towards favoring such services.
For both the above hypotheses, the direction of impacts (as opposed to the strength or effectiveness) will depend on the nature of the ERA service. If, for example, the service emphasizes longer-term outcomes beyond the follow-up period (such as encouraging investment in human capital through additional take-up of education and/or training), the impact on months of employment during the follow-up period may be negative and the impact on months receiving welfare may be positive. On the other hand, if the ERA service emphasizes shorter-term outcomes during the follow-up period (such as in-work advice or information about monetary benefits available from ERA) the impact on months of employment during the follow-up period may be positive and the impact on months receiving welfare may be negative. From the policy maker's perspective, negative impacts on employment and positive impacts on welfare receipt during the follow-up period may be viewed as somewhat disappointing, however from the individual's perspective these may lead to better long-term outcomes, beyond the follow-up period.
It is important to keep in mind that when testing hypotheses about the relationship between the intensity and type of ERA services and the impacts of ERA, the control group plays an important role. Previous studies have identified the possibility of "substitution bias" in social experiments (Heckman and Smith, 2005, Heckman et al., 2000). Many control group members received services under the existing NDLP program that were similar to the services received by program group members under ERA. The impact of ERA will be influenced by the differential receipt of services between program and control group members. If control group members receive the same advancement services as program group members, then both might potentially benefit, but the impact of ERA would be zero.
Thus, when we measure services received by ERA program group members in a particular office, we need to construct them as the difference in the receipt of those services between program and control group members, to account for possible substitution bias. 13 The actual level of service receipt of control group members will influence control group (NDLP) outcomes, but not the impacts of ERA. 14 In addition to individual-level data, our analysis uses office-level variables for both ERA program group members and control group members that allow us to relate the inter-office differences in impacts to the particular characteristics of the offices. Consequently, a multi-level statistical framework is required (see Bryk and Raudenbush, 2001, and Bloom et al., 2005). The model includes two office-level random error terms. The first captures random variation in the average office-level outcome for the control group.
The second captures random variation in the average office-level impact for the program group. It is the separate specification of the two error terms and the inclusion of office-level characteristics as explanatory variables that distinguish the multi-level model from the more traditional regression models used in the program evaluation literature.
The multi-level Poisson model described above has the following formal statistical structure: 15

(1) Pr(Y_ji = y | υ_j, μ_j) = exp(-θ_ji) θ_ji^y / y!, with θ_ji = exp(Y_ji),

(2) Level 1: Y_ji = α_j + β_j P_ji + Σ_k δ_k CC_kji + Σ_k γ_k CC_kji P_ji,

(3) Level 2: α_j = α_0 + Σ_m ζ_m SI_mj + Σ_n η_n ST_nj + υ_j,
             β_j = β_0 + Σ_m π_m DSI_mj + Σ_n φ_n DST_nj + μ_j,

or, combining the equations for levels 1 and 2,

(4) Y_ji = α_0 + Σ_m ζ_m SI_mj + Σ_n η_n ST_nj + β_0 P_ji + Σ_m π_m DSI_mj P_ji + Σ_n φ_n DST_nj P_ji + Σ_k δ_k CC_kji + Σ_k γ_k CC_kji P_ji + [υ_j + μ_j P_ji],

where θ_ji = exp(Y_ji), Y_ji is the outcome for individual i in office j, P_ji is the ERA program group dummy, CC_kji are individual characteristics, SI_mj and ST_nj are office-level service intensity and service type measures for the control group, DSI_mj and DST_nj are the corresponding ERA-control differences, and υ_j and μ_j are office-level random error terms. In estimating the parameters of this model, we assume that the office error terms μ_j and υ_j are correlated with each other and are realizations from a bivariate normal distribution with mean 0 and 2x2 variance matrix Σ. Estimation is performed using maximum likelihood.
In all cases, the estimated variances of the error terms are statistically significant and the correlation coefficients of the error terms are negative and statistically significant (full results are available from the authors on request).
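Fully replicating this maximum likelihood estimator with correlated office-level random effects requires specialized mixed-model software. As a rough, simplified stand-in, the sketch below fits the fixed part of equation (4) as a Poisson GEE with an exchangeable working correlation within offices; the variable names are hypothetical and the paper's random-effects structure is only approximated, not reproduced.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical columns: outcome and program dummy, individual characteristics (CC),
# office-level control-group measures (SI, ST) and ERA-control differences (DSI, DST).
df = pd.read_csv("era_analysis_file.csv")  # hypothetical file

formula = (
    "months_on_welfare ~ program"
    " + caseload + edu_help_ctrl + support_ctrl"                       # SI, ST terms
    " + program:caseload_diff + program:edu_help_diff + program:support_diff"  # DSI, DST terms
    " + age30plus + a_level + deprivation"                             # CC terms
    " + program:age30plus + program:a_level + program:deprivation"     # CC x P terms
)

# Exchangeable within-office correlation stands in for the office error terms
model = smf.gee(formula, groups="office", data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())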
---
Interpreting the Results
ERA was designed as a randomized control trial and, since randomization was at the level of the individual, office-level impacts estimates are also experimental. However, the analysis in this paper uses non-experimental techniques in order to examine the factors that appear to influence program effectiveness. In view of this, it is appropriate to consider the extent to which the estimation results can be viewed as capturing causal relationships rather than mere associations.
There are two key issues that need to be considered in assessing the causal validity of the results. The first is that the characteristics of individuals may vary across offices in a way that is related to impact. It was explicit in the design of the ERA evaluation that the pilot areas should represent a broad variety of individuals and local economies. We might expect (and indeed our later results confirm this to be the case) that there will be variation across individuals in the effectiveness of ERA. The concern then is that the office-level variation in program effectiveness reflects compositional and other differences across offices. Our analysis controls for the effect of observed individual characteristics on both outcomes (equation 2) and impacts (equation 3). Likewise, we control for variations in area deprivation. There may, of course, be other influences that we do not observe and so cannot be controlled for. Our model assumes that unobserved office-level influences on outcomes are captured by the random error term for control group outcomes (υ j in equation 3).
Similarly, unobserved office-level influences on impacts are captured by the random error term for program impact (μ j in equation 3). Our model implicitly assumes that, after allowing for the impacts to vary with individual characteristics, the level of local deprivation and unobserved office-level factors, there is no further variation in program effectiveness across subgroups defined by other unobserved characteristics. Given the rich nature of the individual characteristics included in the model and the narrowly defined criteria for inclusion in the experiment (lone parents looking for help re-entering the labor market), this seems a reasonable assumption.
The second concern is that the type of service provided by an office may be endogenous in the sense that it is influenced by characteristics of the individual welfare recipients, local labor market conditions, or other factors that are unobserved. In addition to controlling directly for individual characteristics in the model, the office-level measures of service delivery are constructed in a way that controls for the characteristics of the individuals within that office. This is explained in detail in Section 5.2 (see equation 5) and goes some way towards addressing the potential endogeneity of service type. However, the possibility remains that there are unobserved characteristics that influence both office-level impacts and the implementation strategy adopted by an office. To gain some insight into this, we draw on the qualitative analysis carried out in the course of evaluating ERA and summarized in Hendra et al., (2011). This analysis found little evidence that offices chose strategies to fit around the particular characteristics of the individual welfare recipients. Instead, the intention was very much to deliver a standardized treatment across offices. To achieve this, each district had assigned to it a "Technical Adviser" whose role was to work with caseworkers in that district's offices to ensure that randomization ran smoothly and to advise on delivering in-work support. Furthermore, four of the six districts adopted a centralized approach, thereby limiting the scope for offices to choose their implementation strategies.
Other factors do appear to have played a role. Staff shortages were a problem in some areas.
In other areas, changes to management policy that were unrelated to ERA had an impact on delivery. For instance, district reorganization meant that some offices were reassigned to a new district, with consequent disruption to delivery, particularly when new district managers did not embrace the ethos of ERA. Overall, the qualitative evidence indicates that variation across offices in the type of support provided is most likely due to exogenous factors.
The strongest basis for achieving causal impact estimates would be if individuals were randomly assigned to offices. This was not feasible, particularly given the large distances between the offices, so we rely instead on a non-experimental approach. However, as with any non-experimental study, there is the possibility that one or more important variables have been omitted. In the discussion of the results, we use causal language, but the reader should remember that those causal statements are only valid when the assumptions of the model are satisfied.
---
Data
To estimate the parameters of equation ( 4), two kinds of data are required. First, there are the variables measured at the individual level (the outcomes, Y, and the individual characteristics, CC). Second, there are variables measured at the office level (service intensity, SI, and service type, ST). Office variables used in the analysis were derived from staffing forms and the personal interviews conducted during the follow-up period.
---
Outcomes
One of the main objectives of the ERA demonstration was employment retention (and hence, a reduction in time spent on welfare). Therefore, the outcomes we examine in this paper are the number of months on welfare and the number of months employed during the five-year follow-up period (roughly 2005 to 2009), distinguishing between the in-program and post-program periods. All outcomes were taken from administrative records, the DWP's Work and Pensions Longitudinal Study (WPLS) database. Information on welfare receipt and employment status is available on a monthly basis; for further details on the data sources, see Hendra et al. (2011). The WPLS contains an identifier that can be used to link to the individuals in the experimental sample. The advantage of this relative to survey data is that there is no attrition in the dataset. Ideally, we would also have examined earnings as an outcome. However, both the earnings and log-earnings distributions were highly non-normal, implying that a linear specification was not appropriate (indeed, efforts to estimate such a model gave unstable results). We present some alternative estimates of how earnings impacts varied with office characteristics in Section 7.
---
Individual Characteristics
Individual-level background characteristics were collected as part of the randomization process. Because they were recorded prior to randomization, these background characteristics are exogenous and thus can be included as regressors in the multi-level model; they are listed in Table 1. Loosely, A-level qualifications are those typically gained at age 18 while O-level qualifications are those usually gained at age 16; "A-level" is used as shorthand for "A-level or higher" and so includes the most highly qualified individuals. To measure local deprivation we use the "Index of Multiple Deprivation," produced by the UK Office of National Statistics. Distinct dimensions of deprivation such as income, employment, education and health are measured and then combined, using appropriate weights, to provide an overall measure of multiple deprivation for each area (specifically, the areas are "Super Output Areas"; for details, see http://www.neighbourhood.statistics.gov.uk/dissemination/Info.do?page=aboutneighbourhood/geography/superoutputareas/soa-intro.htm).
As will be discussed below, to facilitate interpretation of the estimated coefficients, all individual characteristics were grand-mean-centered (expressed as deviations from the overall mean). In addition, the estimated coefficients from the Poisson model were expressed in monthly equivalents by multiplying the incidence rate ratios minus one by the control group means (that is, percentage effect of each variable times the control group mean for that variable).
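As a small illustration of these two transformations (grand-mean centering and the conversion of Poisson coefficients to monthly equivalents), the snippet below uses made-up numbers; it is not taken from the study's data.

```python
import numpy as np
import pandas as pd

# Grand-mean centering of an individual characteristic (made-up values)
a_level = pd.Series([1, 0, 0, 1, 1])
a_level_centered = a_level - a_level.mean()

# Converting a Poisson coefficient to a monthly equivalent:
# (incidence rate ratio - 1) multiplied by the control group mean
beta = 0.18            # hypothetical coefficient on A-level qualifications
control_mean = 13.8    # hypothetical control group mean months employed
monthly_equivalent = (np.exp(beta) - 1.0) * control_mean
print(f"Monthly equivalent: {monthly_equivalent:.2f} months")
```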
Table 1 presents means of the individual characteristics and outcomes used in our analysis, along with their cross-office range. As this table indicates, the sample overwhelmingly comprises female lone parents with generally low levels of educational qualifications. About one-half of these mothers have only one child and in about half of all cases the child is under the age of 6 years. More than 70 percent of the sample did not work in the year prior to random assignment and they received welfare for an average of 17 of the 24 months preceding random assignment.
The median deprivation index in our sample is 27.4, which corresponds to approximately the 71st percentile of deprivation across England. 22 Thus, our sample somewhat overrepresents individuals living in relatively disadvantaged areas.
There was considerable inter-office variation in many of the characteristics, including marital status, educational qualifications, number and ages of children, prior work status, age and ethnicity of the individual, and the level of multiple deprivation in the community served by the office.
The average individual in our sample spent about 26 months on welfare during the follow-up period (about 43 percent of the time) and was employed for roughly the same amount of time. Of the two outcomes, average months on welfare showed the greatest interoffice variation, ranging from 14.4 months to 35.3 months. Average months employed ranged from 20.3 months to 33.4.
---
Office Characteristics
As indicated above, we classify the office variables into service intensity (individual caseload measures) and service type. For the service-type variables, ERA-control differentials are used to explain variation in program impacts. To explain variation in control group outcomes, control-group values of the service-type variables are used.
The caseload measures were constructed from monthly monitoring forms for the first 17 months of the experiment. All other office-level variables were constructed from individuals' responses to survey interviews carried out 12 and 24 months after random assignment. 23 It is likely that the advice and support offered to individuals were influenced to some extent by their own characteristics. However, more relevant to the analysis is a measure of the extent to which the office emphasized particular elements of ERA (i.e., their philosophical approach to helping persons on welfare achieve self-sufficiency), controlling for differences in the caseload composition. Although office implementation philosophy cannot be observed directly from any of the available data sources, we form proxies for them by adjusting the individual survey measures to control for observable individual characteristics across offices that may have influenced the type of service implemented using the following regression model: 24
(5) F_i = λ_0 + Σ_k λ_1k O_ki + Σ_k λ_2k O_ki P_i + Σ_l λ_3l CC_li + e_ji,

where F_i is an individual's survey-based service measure, O_ki is an indicator for office k, P_i is the ERA program group dummy, and CC_li are individual characteristics. (For details on the individual surveys, see Dorsett et al., 2007, and Riccio et al., 2008.) The adjusted office-level value of F is λ_1k for control group members in office k, while the corresponding mean value of F for program group members is λ_1k + λ_2k. The program-control differential is λ_2k. Overall, the adjusted office implementation measures are correlated to some extent with each other (meaning that offices that rank high on one measure have some tendency to rank high on the other), but the correlations are modest at most. Thus, we are able to treat these office implementation measures as separate variables in the statistical analysis.
As noted above, the motivation for constructing office-level measures in this way is that it isolates the tendency for offices to vary in the degree to which they emphasize particular aspects of delivery after controlling for the fact that this is driven in part by the between-office variation in caseload composition. A simpler approach would be to use unadjusted measures and rely on the inclusion of individual characteristic variables in the level 1 regression (equation 1) to control for variations across office practices that stem from compositional differences. However, this simpler approach cannot achieve that aim since individual characteristics in the level 1 regression help explain only the variation in the level 1 outcome, not the variation in service intensity or type. A drawback to our approach is that, by subsequently including the λ 1k and λ 2k terms as regressors in the multilevel model, no account is taken of the fact that they are estimates and subject to error. While this may introduce a specification bias, data limitations prevent us from adopting a better approach.
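A minimal sketch of the adjustment in equation (5) follows, assuming hypothetical column names: the survey measure is regressed on office dummies, office-by-program interactions and individual characteristics, and the office coefficients are kept as the adjusted office-level measures (their sampling error is ignored, as noted above). The parameter-name filtering assumes patsy's default naming conventions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: f (e.g., received help with in-work advancement, 0/1),
# office, program, and individual characteristics.
df = pd.read_csv("survey_measures.csv")  # hypothetical file

fit = smf.ols(
    "f ~ 0 + C(office) + C(office):program + age30plus + a_level + deprivation",
    data=df,
).fit()

params = fit.params
# Coefficients on the office dummies: adjusted control-group level (lambda_1k);
# coefficients on the office-by-program interactions: ERA-control differential (lambda_2k).
lambda_1 = params[[p for p in params.index if p.startswith("C(office)[") and ":program" not in p]]
lambda_2 = params[[p for p in params.index if ":program" in p]]
```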
The specific office variables used in this study are as follows: 25 the caseload per adviser; the proportion of advisers working with ERA participants; the proportions of individuals advised to think long-term, receiving help finding an education or training course, receiving help with in-work advancement, and receiving support while working; and the proportion of ERA participants aware of the retention bonus. The differences in the service measures between the ERA and the control groups were interacted with the program group dummy variable (P_ji) and included in the level 2 equation determining β_j (the program impact), while the control-group values were included in the level 2 equation determining α_j (the control group outcome).
---
Summary Statistics for the Office Variables
Table 2 presents the means and the cross-office range of the (regression-adjusted) office variables used in the multi-level analysis. The caseload averages about 29 individuals per adviser and about 42 percent of these advisers, on average, work with ERA participants.
There is significant variation in the caseload across offices (from about 3 individuals per adviser to 110 individuals per adviser) and in the proportion of advisers working with ERA participants (from about 20 percent to 94 percent).
For each of the service type measures, Table 2 presents the mean proportion for the control (NDLP) group, the mean proportion for the program (ERA) group, and the mean ERA-control group difference in the proportion. The first and third of these (control group value and ERA-control group difference) are used as variables in the multi-level model. The second (ERA value) is not directly included in the multi-level model (except for the retention bonus awareness variable) and is shown for informational purposes only.
On average, for every service type, the ERA group had a higher proportion receiving that service than the control group. This is as would be expected; however, the differential is not always that great. In some offices, a greater proportion of the control group received the services, as reflected in the negative minimum values of the differential in the cross-office ranges. Specifically, there were 4 offices in which the proportion of individuals advised to think long-term was higher among the control group than the program group; 7 offices where the proportion of individuals receiving help finding an education or training course was higher; 6 offices where the proportion receiving help with in-work advancement was higher; and 7 offices where the proportion receiving support while working was higher. In no office were fewer than three-quarters of the ERA participants aware of the retention bonuses, and in some offices all of the ERA participants surveyed were aware of the bonuses.

The considerable amount of services received by control group members may have contributed to the fact that there were few significant overall impacts in the ERA evaluation, and it highlights the importance of the type of model presented in this paper, which attempts to control for possible substitution bias in estimating the impacts of particular program features across offices. As will be indicated later in Section 6.5, by empirically taking into account the possibility of substitution bias, the estimated coefficients on the office-level program-control group differences represent the impacts assuming no substitution bias (that is, the impact assuming all program group members receive the particular feature in question and no control group members receive it). We describe how these coefficients need to be interpreted to reflect the actual substitution biases present in the data.
Table 3 presents a correlation matrix of the office variables for the control group and the ERA program group. For both groups, the correlations between the non-caseload variables are all positive, suggesting that retention and advancement services were being delivered together, although not perfectly. For the ERA group these positive correlations are consistent with the goals of the demonstration. From a statistical standpoint, the fact that the correlations are modest implies that it is theoretically possible to estimate the contribution of each element separately.
---
Results
We present the results of estimating the multi-level Poisson model in Tables 4 to 7. As was done for the empirical Bayes estimates in Figures 1A and 1B, we present separate estimates for the in-program period (years 1 to 3) and the post-program period (years 4 and 5). Recall that the Poisson coefficients are presented in monthly terms to facilitate interpretation of the results.
Table 4 shows the effects of the individual characteristics on the five-year control group outcomes. Table 5 shows how these individual characteristics affect the program impact (subgroup impacts). Table 6 shows how the office characteristics affect the control group outcomes and Table 7 shows how the office characteristics affect the program impacts. In Table 4, the coefficients represent deviations from the outcomes of the omitted reference groups (see Table 1). Thus, for example, the coefficient of 2.62 for individuals with A-level qualifications on months employed in years 1-3 is their additional months employed compared to individuals with no qualifications.
---
Effects of Individual Characteristics on Outcomes
The average control group member spent 17.9 months on welfare and 13.8 months employed during the in-program period and 7.3 months on welfare and 10.2 months employed during the post-program period. As would be expected, many of the individual characteristics are significantly related to the outcomes in both periods. Individuals who are younger (below age 30), less educated (qualifications below O-level), have less previous work experience (worked 12 or fewer months in the past three years), are non-white, and live in more deprived areas spent longer periods of time on welfare and had less time employed than their counterparts (who are aged at least 30, qualified at O-level or higher, worked more than 12 months in the three years before random assignment, white, and living in less deprived areas). Individuals who were not previously partnered also spent more time on welfare than those who were previously partnered; they did not spend less time employed during the in-program period, although they did spend less time employed during the post-program period.
Interestingly, time spent on welfare declines systematically during the in-program period according to the calendar time of random assignment (the later the time of random assignment, the fewer the months spent on welfare). At first sight, this seems somewhat surprising, given that the onset of recession in the second quarter of 2008 will have affected the labor market outcomes of those randomized earlier less than those randomized later. However, there are two countervailing factors. First, a feature of the recent recession is that, up until the second quarter of 2010 (the latest period for which outcomes are considered in this analysis),
the reduction in the overall employment rate was driven almost entirely by the fall in the proportion of men in work. As we have already seen, the NDLP group is predominantly female and women's employment remained comparatively stable. Second, policy developments in the UK have increased the conditions placed on lone parents. For example, those in receipt of welfare have had to attend an increasing number of work-focused interviews and, since 2005, to agree an action plan with their adviser to prepare themselves for work (Finn and Gloster, 2010). As another example, since 2008, lone parents with a youngest child aged 12 or over are no longer entitled to welfare solely on the grounds of being a lone parent (DWP, 2007). Those randomly assigned more recently will have been subject to the new regulations for a greater proportion of their follow-up period than those randomly assigned earlier (a fuller discussion of policy developments in the UK during the years ERA was conducted is presented in Hendra et al., 2011).
---
Effects of Individual Characteristics on Program Impacts
Table 5 presents the effects of the individual characteristics on program impacts over the three year in-program and two-year post-program periods. For comparison purposes, the grand mean impact of ERA (β 0 ) is included in the table. The coefficients represent deviations from the impacts of the omitted reference groups (see Table 1). Thus, for example, the coefficient of 2.66 for individuals with A-level qualification on months employed during the in-program period is their additional impact compared to individuals with no qualifications.
Note that the impacts for individuals in the reference groups (those with no qualifications in this example) are not shown in Table 5. All that the table shows are deviations in impacts from the reference group, and not the impacts themselves for either group.
The average response to ERA (the grand mean impact in Table 5) is statistically significant for both outcomes during the in-program period, but is not statistically significant during the post-program period. During the in-program period, months on welfare declined by about one and a half months (8.5 percent) and months employed increased by about three-quarters of a month (5.5 percent).
Several of the impacts vary significantly across subgroups. One notable finding has to do with educational qualifications. It appears that individuals with O- and A-level qualifications had stronger responses to ERA over the full five-year follow-up period than individuals with no qualifications. They had larger reductions in the number of months on welfare, and larger increases in the number of months employed, than individuals with no qualifications. Another notable result is that during the in-program period, ERA seems to have had its biggest impacts on individuals who had the least amount of employment during the three years prior to random assignment. Specifically, months on welfare fell by more and months employed increased by more for individuals who had been employed for a year or less in the three years prior to random assignment. These impacts did not carry over into the post-program period; in fact, months on welfare actually rose for these individuals during the post-program period. Still another notable result is that the impacts on months receiving welfare and months employed seem to have varied with the degree of local area deprivation, particularly during the in-program period. Specifically, ERA participants living in more deprived areas had larger reductions in months on welfare and larger increases in months employed than ERA participants living in less deprived areas. Thus, ERA appears to have been more effective in more deprived areas. Finally, ERA seems to have caused larger reductions in months on welfare and greater increases in months employed for older individuals (aged 30 years and above) and minority individuals.
---
Effects of Office Characteristics on Office Control Group Outcomes
Table 6 shows how the office characteristics affect office control group outcomes. In other words, the results in Table 6 provide an indication of whether office characteristics are systematically related to office outcomes for standard NDLP participants. For comparison purposes, the grand mean control group outcome (α 0 ) is also shown. In addition to presenting the coefficient estimates, we also present the interquartile range of the outcome across offices.
The interquartile range is the predicted outcome from the 25 th percentile of the office characteristic to the 75 th percentile. The interquartile range provides an indication of how the control group outcome varies across offices possessing the middle 50 percent range of values of a particular characteristic.
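For illustration, the interquartile range reported in the tables can be thought of as in the brief sketch below, where both the coefficient and the distribution of the office characteristic are invented.

```python
import numpy as np

coef = 11.0   # hypothetical effect of moving the office characteristic from 0 to 1
rng = np.random.default_rng(0)
office_share = rng.uniform(0.2, 0.5, size=56)   # made-up office-level proportions

p25, p75 = np.percentile(office_share, [25, 75])
interquartile_range_effect = coef * (p75 - p25)
print(f"Predicted change from the 25th to the 75th percentile: "
      f"{interquartile_range_effect:.2f} months")
```

In the Poisson setting the predicted outcomes would be evaluated at the two percentile values rather than scaled linearly; the linear version is shown only to convey the idea.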
As Table 6 shows, in offices with larger adviser caseloads, control group members spent more months on welfare and fewer months employed over the follow-up period. The results also suggest that in offices where all NDLP recipients receive help in finding education courses, the amount of time spent on welfare is increased by 11 months during the in-program period and 6 months during the post-program period, and the amount of time spent in work is reduced by 6 months during the in-program period and 3 months during the post-program period, relative to offices where no recipients receive such help. These are sizeable effects. While they imply greater dependence on welfare during the five-year follow-up period, they may imply greater self-sufficiency in the long run if the education eventually leads to upgraded skills and higher employment and earnings. However, inspection of Table 6 reveals that the effects of prolonging welfare and reducing employment are stronger during the in-program period and gradually weaken after that. As was the case for the effects of caseload size, the variation across offices in the proportion of recipients receiving help in finding education courses is not great.
In contrast to education services, in offices where all recipients receive help with in-work advancement, there is a statistically significant effect on months employed during the in-program period, but not during the post-program period, nor on months receiving welfare at any time during the full five-year follow-up period. The employment effect is sizeable, but does not vary much across offices.
Finally, in offices where individuals receive support while working, months on welfare decline and months employed increase during both the in-program and post-program periods, although the post-program effect on welfare is not statistically significant. As in the case of help with in-work advancement, there is little variation in this effect across offices.
Taken together, these results suggest that certain services matter for traditional NDLP recipients, particularly those that target education and employment activities. However, those that target education tended to prolong welfare receipt and delay employment during the five-year follow-up period, while those that target employment tended to have the opposite effect, reducing time spent on welfare and increasing time employed during the five-year follow-up period. Because we do not have data beyond the five-year follow-up period, we are unable to determine whether the additional education help received during the five-year follow-up period eventually leads to lower receipt of welfare and greater employment over the longer run.
---
Effects of Office Characteristics on ERA Program Impacts
Table 7 shows how the office characteristics are related to ERA program impacts.
Recall that these results are based on a non-experimental analysis and can only be given a causal interpretation if the assumptions of the model are satisfied. Also recall that for the ERA input types available to control group members (advice for thinking long-term, help in finding education courses, help with in-work advancement, and support while working), the office characteristics included in the multi-level model are measured as differences in the proportions receiving such services between the ERA program group and the control group (see Table 2). The other two office characteristics included in the multi-level model (the proportion of advisers working with ERA participants and the proportion of ERA participants aware of the employment retention bonus), apply only to ERA program group members and, hence, are simply measured as the proportion for ERA program group members.
As indicated in Table 7, there are statistically significant impacts of ERA on welfare receipt and employment during the in-program period, but not during the post-program period. During the in-program period, welfare receipt is reduced by 1.5 months (an 8 percent impact) and employment is increased by 0.8 months (a 6 percent impact).
During the in-program period, five of the six office characteristics are estimated to be significantly related to ERA program impacts. First, in offices where all of the advisers were working with ERA participants, the average program group member spent 3 fewer months on welfare, but was not employed significantly longer, than in offices where no advisers were working with ERA participants. To put it another way, an individual in an office with a 10 percentage point higher proportion of advisers working with ERA participants will have 0.3 fewer months on welfare than an individual in an office where the same proportion of advisers worked with ERA participants and control group members (NDLP recipients). The information on interquartile ranges is very important because, in practice, few of the program-control group differences in receiving this kind of help were very large, so the effect translates to only about a 0.6 month interquartile range across offices in the impact of the advisers on welfare receipt.
Second, in offices where all ERA participants were given help finding education courses but control group members were not, the average program group member spent almost 4 more months receiving welfare and 4 fewer months employed, although the welfare effect is not statistically significant. Again, the information on interquartile ranges is very important because, in practice, few of the differences in receiving this kind of help were very large, so the effect translates to only about a 1.1 month interquartile range across offices in the impact on welfare receipt and about a 1.2 month interquartile range across offices in the impact of this service on months employed.
Third, in offices where all ERA participants received help with in-work advancement, but control group members did not, the average program group member spent almost 8 more months employed but not a statistically significant shorter time on welfare. Again, few of the differences in receiving this kind of help were very large across offices, so the effect translates to only about a 1.2 month interquartile range across offices in the impact of this service on months employed.
Fourth, in offices where all ERA participants received support while working, the average program group member spent 3.5 fewer months on welfare and was employed for 3.2 more months. The interquartile range of impacts was about 1.1 months for welfare and 1.0 months for employment.
Finally, in offices where all ERA participants were aware of the bonus, the coefficient implies that they would have spent 9.4 fewer months on welfare than in offices where no ERA participants were aware of the bonus. There is also a sizeable coefficient of 8.4 months for employment, but it is not statistically significant. In practice, almost all ERA participants were aware of the bonus (no office had fewer than 75 per cent aware), so while the bonus was apparently an important part of the ERA program design, it translated into a moderately small (about 1 month) interquartile range of ERA program impacts across offices.
Virtually all of the services that had a statistically significant impact during the in-program period retain their statistical significance during the post-program period. The one exception is the impact of help with in-work advancement on employment, which is no longer statistically significant in the post-program period. However, the impact of this service remains positive. For all of the services, as was the case during the in-program period, the interquartile ranges of impacts were modest because of mostly small program-control group differences in receipt of these services.
---
An Alternative Specification to Examine Earnings
As indicated earlier, the chief objective of ERA was to encourage employment retention and so our main outcomes of interest were time spent employed and time spent on welfare. However, ERA also aimed to promote advancement in employment. Pay progression is one possible manifestation of advancement, so it is of interest to consider earnings as an outcome.
Unfortunately, as noted in Section 5, it was not possible to estimate a multi-level model for earnings. In order to have some sense of how earnings impacts vary with program-control differences in office characteristics, we present in this section supplementary results using a "reduced form" estimation approach, similar to the one used by Somers et al. (2010) in examining how impacts on student grades vary with program implementation conditions in a demonstration of supplemental literacy courses for struggling ninth graders.
Methodologically, we use a linear regression model, but cluster the standard errors in order to allow for within-office correlation of errors. This approach implies a simplified version of equation (4) as follows:

(6) Y_ji = α_0 + β_0 P_ji + Σ_m π_m DSI_mj P_ji + Σ_n φ_n DST_nj P_ji + Σ_k δ_k CC_kji + Σ_k γ_k CC_kji P_ji + ε_j + u_ji.
It is helpful to highlight the differences between this specification and the multilevel model.
First, to control for variations between offices in the level of earnings, an office-specific error term, ε_j, has replaced the random effect υ_j. A consequence of this is that variables that do not vary within offices cannot be included, so the Σ_m ζ_m SI_mj and Σ_n η_n ST_nj terms from equation (4) are no longer present, and therefore variation in control group outcomes with office characteristics cannot be estimated. Second, this specification does not involve the interaction term μ_j P_ji. This amounts to an assumption that the office-level error term in equation (4) is zero. In other words, all variation in program impacts is assumed to be explained by the program-control differences in services. Third, an individual-level error term, u_ji, has been introduced since we are now estimating a linear regression model rather than a Poisson model.
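A minimal sketch of this reduced-form earnings specification, with hypothetical variable names, is shown below. One reading of the specification is to include office fixed effects (so that variables constant within an office drop out, as described above) and to cluster the standard errors by office; that reading, rather than the authors' exact code, is what the sketch implements.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: annual earnings, program dummy, office id, individual
# characteristics, and ERA-control differences in the office service measures.
df = pd.read_csv("era_earnings.csv")  # hypothetical file

formula = (
    "earnings ~ C(office) + program"
    " + program:adviser_share + program:edu_help_diff + program:advance_diff"
    " + program:support_diff + program:bonus_aware"
    " + age30plus + a_level + deprivation"
    " + program:age30plus + program:a_level + program:deprivation"
)

# Linear model with standard errors clustered at the office level
result = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["office"]}
)
print(result.summary())
```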
The results provided by this model are of interest both in themselves and also because they represent a more common estimation approach seen in the literature. We preface them by noting that, for the welfare and employment outcomes, the estimated variances and correlation coefficients of the office-level error terms are statistically significant, so our expectation might be that this would also apply when considering earnings. In view of this, the results in this section may be based on a mis-specified model.
With this caveat in mind, the results are presented in Table 8. With regard to the overall impact of ERA, this was statistically significant in 2005/6, increasing annual earnings by an estimated £309. There was no significant impact in later years. This is consistent with the welfare and employment impacts, which showed significant impacts during the in-program period but not the post-program period. Under this specification of the model there is no variation in program impacts other than that associated with program-control differences in services. Consequently, Table 8 does not report an interquartile range around the grand mean impact.
Program impacts did not vary with the proportion of advisers working with ERA participants except in 2008/9, where the reported positive coefficient translates into an interquartile range of nearly £300 across offices in the impact of advisers. This is consistent with the reported results for time spent on welfare, which also showed a variation that became more statistically significant in the post-program period. Higher earnings impacts in 2005/6
were also seen in offices where the proportion of ERA participants advised to think long-term was higher. The interquartile range in this case was just over £500. However, this variation was not statistically significant in later years. For welfare and employment outcomes, there was no significant variation in any year.
There is evidence that the earnings impacts were lower in offices that provided more help with finding education courses. This was consistent across all years, although only statistically significant in 2005/6 and (especially) 2008/9. The interquartile range in 2008/9 is £769. It is perhaps of some concern that these longer-term outcomes are not suggestive of emphasis on education being rewarded with positive returns. It is of course possible that this finding could be reversed with even longer-term outcomes. We note that these results are consistent with those reported for employment impacts.
Emphasizing help with in-work advancement, on the other hand, is associated with stronger earnings impacts in all years. Beginning in 2006/7, these variations are statistically significant, and the interquartile range is quite stable at £375, £511 and £461 in that year and the successive two years, respectively. The employment impacts showed similar variation during the in-program period but not during the post-program period.
There was no significant impact variation in any year with the proportion of ERA participants receiving support while working. This is in contrast to the welfare and employment impacts, for both of which this appeared to be a key factor along which impacts varied. Nor was there any variation associated with awareness of the retention bonus, something that had been shown to correlate with program effectiveness when considering exits from welfare. However, the bonus awareness coefficients are positive and large for all four years and in three of the years the coefficients are not too far from being statistically significant.
Overall, this summary of the earnings results has revealed a general consistency with the welfare and employment outcomes, but there are also some differences. The reasons for the differences are not clear but could simply be the result of the different estimation techniques followed. In view of this, and of our preference for the multi-level specification, we do not attempt to interpret these differences.
---
Conclusions and Policy Implications
For out-of-work lone parents, the ERA demonstration had statistically significant impacts on welfare receipt and employment during the in-program period (years 1 to 3), and these impacts varied significantly across the offices that participated in the demonstration.
The main purpose of this study has been to examine how program impacts varied with differences across the offices in the way the ERA program was implemented. Secondary objectives of this study have been to determine whether office characteristics can help explain cross-office variation in the control environment (under the standard NDLP program) and whether the impacts of ERA vary with certain personal characteristics of the ERA participants (subgroup impacts).
In interpreting the results of this study, it is important to understand that while certain office characteristics may be quite important in explaining outcomes and impacts, lack of variation in these characteristics across offices may lead to only a small estimated variation in these outcomes and impacts across offices. Thus, for example, while our results indicate the importance of conveying information about the financial rewards available to lone parent ERA participants who maintain employment (given by the estimated coefficients in Table 7), there was not much variation in the actual conveying of this information across offices, so it is associated with only modest variation in program impacts across offices.
Our results indicate that ERA was especially effective at reducing welfare receipt and increasing employment for lone parents with O-and A-level qualifications, those living in more deprived areas, and those aged 30 or over. Subgroup variation was not, though, the primary focus of this analysis. Our main results concern impact variation with office characteristics. Several such characteristics were found to be related to the control environment (outcomes of control group members under the standard NDLP program).
Offices with higher adviser caseloads had control group lone parents that spent more months on welfare and fewer months employed over the five-year follow-up period. The results of this study are also interesting in another regard. While the overall impact of ERA on welfare and employment 4 to 5 years post-randomization was not statistically significant (see Table 7), we find that this masks significant variation of impacts across offices, some being positive and some negative. This suggests that, in addition to focusing on overall impacts, which is typically done in employment and training demonstrations such as the one examined here, policy evaluation should, where possible, pay attention to implementation procedures across offices where the program is being conducted.
Rather than concluding a policy to be ineffective, the type of approach presented in this paper may offer a means of learning from those with positive impacts in order to refine policy and, in time, raise overall effectiveness.
Although we were unable to estimate a multi-level model of earnings due to statistical convergence problems, we were able to estimate a simpler, more restrictive, earnings model that has been used in other studies to examine variation in program impacts with program implementation practices. The earnings model estimates are roughly consistent with the multi-level welfare and employment models, but there are also some differences, primarily in statistical significance rather than direction of effects.
In conclusion, it is relevant to mention that, as with any long-term study, the economic and policy environment changes. Most obviously, the results relate to a period marked by severe recession and associated increases in unemployment. Equally relevant, though, is the fact that the last few years have seen a number of policies introduced that directly affect lone parents in the UK. Lone parents have been increasingly required to attend work-focused interviews and those with a youngest child aged 7 or over now have to actively seek work. Furthermore, In-Work Credit was introduced in 2008, providing weekly subsidies to lone parents entering work of 16 or more hours per week. The effect of such policy developments is to reduce the contrast between the service available to the ERA group and that available to the control group and has an important bearing on how to view the overall effect of ERA. However, despite these policy changes and despite the fact that our analysis is non-experimental, we have obtained plausible results identifying those particular implementation features that tended to be linked to stronger impacts of ERA.

Notes to the tables: coefficients for individual characteristics represent deviations from the omitted reference groups (see Table 1). For example, in Table 4 the coefficient of 1.25 for individuals who were never partnered on months on welfare in years 1 to 3 implies they spent 1.25 months longer on welfare than customers who were previously partnered (not shown in the table), and in Table 5 the corresponding coefficient of 0.38 is their additional impact. Interquartile ranges are the predicted outcomes or impacts from the 25th percentile of the office characteristic to the 75th percentile. *Significant at the 10 percent level; **significant at the 5 percent level; ***significant at the 1 percent level.
a4319b605434b29f841aaded6b2421fe45374950 | Setting global research priorities for child protection in humanitarian action: Results from an adapted CHNRI exercise | 2,018 | [
"JournalArticle"
] | Armed conflict, natural disaster, and forced displacement affect millions of children each year. Such humanitarian crises increase the risk of family separation, erode existing support networks, and often result in economic loss, increasing children's vulnerability to violence, exploitation, neglect, and abuse. Research is needed to understand these risks and vulnerabilities and guide donor investment towards the most effective interventions for improving the well-being of children in humanitarian contexts. The Assessment, Measurement & Evidence (AME) Working Group of the Alliance for Child Protection in Humanitarian Action (ACPHA) identified experts to participate in a research priority setting exercise adapted from the Child Health and Nutrition Research Initiative (CHNRI). Experts individually identified key areas for research investment which were subsequently ranked by participants using a Likert scale. Research Priority Scores (RPS) and Average Expert Agreement (AEA) were calculated for each identified research topic, the top fifteen of which are presented within this paper. Intervention research, which aims to rigorously evaluate the effectiveness of standard child protection activities in humanitarian settings, ranked highly. Child labor was a key area of sector research, with two of the top ten priorities examining the practice. Respondents also prioritized research efforts to understand how best to bridge humanitarian and development efforts for child protection, as well as identifying the most effective way to build the capacity of local systems in order to sustain child protection gains after a crisis. | Introduction
The number of people affected by humanitarian crises is on the rise, perpetuated by armed conflict and natural disasters [1]. In 2017, there were over 65 million forcibly displaced people, over half of whom were under the age of 18 [2]. In addition, over one billion children live in countries affected by armed conflict [3]. Environmental factors, including climate change, are likely to increase the number of conflicts and intensify the severity of natural disasters [4][5]. Armed conflicts and large-scale disasters increase the potential for family separation and the erosion of existing support systems, putting children at risk of abuse, exploitation, violence, and neglect. The widespread economic shocks that often accompany humanitarian crises create further vulnerabilities for children when households employ negative coping strategies to manage economic stress. In Lebanon, where over one million Syrian refugees have been registered with the United Nations High Commissioner for Refugees (UNHCR), child marriage and child labor have been reported as families struggle financially [6][7]. Children in circumstances of economic and physical insecurity are also at risk of child trafficking, sexual exploitation, and recruitment by armed forces and extremist groups. Within these contexts, child protection experts in non-governmental organizations (NGOs) and multilateral institutions, such as the UN Children's Fund and the United Nations High Commissioner for Refugees, work to prevent and respond to incidents of abuse, neglect, exploitation, and violence against children. These efforts can take the form of broader systems-strengthening interventions that seek to build the capacity of national actors to implement effective social support systems that care for children and families, both in formal and informal spheres. As a complement to systems strengthening, child protection initiatives may also take the form of direct implementation, such as the establishment of "Child Friendly Spaces (CFS)" that allow children safe zones to play, parenting trainings that emphasize alternatives to physical punishment, or family tracing and reunification for unaccompanied or separated children. Yet, the assumptions that drive such child protection efforts in humanitarian practice have not yet been fully grounded in scientific evidence. Protection risks are often estimated and prioritized based on anecdotal accounts [8], definitions of child protection concepts are often not standardized [9], and there is scant evidence on the effectiveness of many of the sector's universally agreed upon standard interventions [10][11][12].
To begin addressing these gaps in empirical research within the sector of child protection in humanitarian contexts, a research priority setting exercise, adapted from the Child Health and Nutrition Research Initiative (CHNRI), was undertaken to identify and rank research priorities. This manuscript presents the process and results of this participatory ranking methodology designed to guide future research investment.
---
Methods
The Child Health and Nutrition Research Initiative (CHNRI) was designed as a tool to help guide policy and investment in global health research, specifically children's health. CHNRI has since been used to establish research priorities across a broad array of global health disciplines [13][14][15][16][17][18][19][20]. The method comprises four stages: (i) determining the boundaries of investigation and creating evaluation criteria; (ii) obtaining and systematically listing input from key stakeholders on critical priorities/tasks (referred to as "research questions") to address gaps in sectoral evidence or knowledge; (iii) enlisting stakeholders to rank the research questions based on a pre-defined set of evaluation criteria; and (iv) calculating research priority scores and agreement between experts (Fig 1). A more detailed explanation of the CHNRI method has been published elsewhere [21][22][23].
The present study was commissioned by the Assessment, Measurement and Evidence Working Group of the Alliance for Child Protection in Humanitarian Action (ACPHA) and was informed by prior consensus-building efforts in the sector [24][25]. In collaboration with a Lead Researcher, the CHNRI method was adapted to prioritize research topics in the sector of child protection in humanitarian settings. For the purposes of this exercise, a 'humanitarian setting' was defined as "acute or chronic situations of conflict, war or civil disturbance, natural disaster, food insecurity or other crises that affect large civilian populations and result in significant excess mortality" [26]. The goal of 'child protection' efforts is "to protect children from abuse, neglect, exploitation, and violence" [27]. 'Children' were defined as "individuals under the age of 18" [28].
Experts working on issues of child protection in humanitarian settings were then invited to take part in semi-structured interviews to discuss the gaps in knowledge and evidence that existed within the sector and to generate research priorities to address these gaps. Forty-seven experts participated in this first round of evidence generation, with representatives from Non-Governmental Organizations (NGOs), United Nations (UN) agencies, donor agencies, and research institutions. Experts were initially identified through three coordination bodies: the Alliance for Child Protection in Humanitarian Action (ACPHA), the Child Protection Area of Responsibility (CP AoR), and UNHCR, with the network extended through snowball sampling. Respondents were strategically diversified to include inputs from those involved in various child protection job functions, including implementation, coordination, policy development, and academia, from a range of geographic locations (Table 1). Recruitment continued on a rolling basis and ended once data saturation, defined as the point at which no new data were being generated, was achieved. The final sample was consistent with previous research that identified 45 to 55 as the number of experts at which collective opinion stabilizes [29].
Aligned with prior CHNRI studies in humanitarian contexts [14], interviews were held via Skype with experts notified in advance that they would be requested to provide their opinions on the most important areas for investment to improve the state of evidence in the field of child protection in humanitarian settings in the next 3-5 years. Participants were encouraged to follow up by email in the event they were able to generate further ideas after the interview had concluded.
Through an iterative process, the Lead Researcher then collated 24 hours of interview notes to identify 90 unique research priorities, condensing interrelated research ideas and simplifying concepts for use in the ranking exercise. The priorities were then thematically organized into the following pre-determined themes: Epidemiological Research; Policy and Systems Research; and Intervention Research (Table 2). The research team provided review and consensus on the themes and categorization, after which the areas for research were listed within the online survey. The survey was pilot tested by individuals who were not involved in the development of research questions but who had general knowledge of humanitarian concepts and survey design. Further, to ensure that question order did not bias results, we implemented a page randomization that shuffled page order within the survey for each new respondent. Experts who participated in the interview process were invited to take part in the online ranking portion of the prioritization exercise. Two additional experts, who were either not previously available or who reached out to participate after the period for interviews had passed, were also invited to take part in the survey.
Each of the 90 research priorities was ranked on four criteria: (i) Relevance: research will support learning that contributes to the prevention of and response to abuse, neglect, exploitation, or violence in humanitarian settings; (ii) Feasibility: research is feasible to conduct in an ethical way; (iii) Originality: research will generate new findings or methods; and (iv) Applicability: research will be readily applied to programs and policies. Relative weights were not assigned to the scoring criteria. For each research question, participants were offered six possible responses: strongly agree (5 points); agree (4 points); undecided (3 points); disagree (2 points); strongly disagree (1 point); and insufficiently informed (considered non-applicable/no response). The scoring matrix was a deviation from past CHNRI studies, which typically offered four possible responses: yes (1 point), no (0 points), undecided (0.5 points), and insufficiently informed/no response. In the development of the present research design, the study team elected to use a full Likert scale to allow for greater granularity when analyzing scores.
Aligned with the CHNRI methodology [13][14][15][16][17][18][19][20], every research question was given a priority score under each of the four judging criteria, calculated by dividing the point total by the maximum number of points available, after excluding from the denominator those who did not answer the question or reported being insufficiently informed; the result was expressed as a percentage [14]. For each question, the overall Research Priority Score (RPS) was then calculated by taking the mean of the total priority scores for each judging criterion, as calculated above. Research questions were then ranked from highest to lowest on overall priority scores, and the top fifteen are presented in Table 3. Standard deviations for RPS are also included to show the variation between total priority scores for each judging criterion (Table 3, S1 Annex).
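To make the arithmetic concrete, the scoring can be sketched as follows; this is an illustrative sketch rather than the computation used in the study, and all variable names and example responses are invented.

```python
# Illustrative sketch (not the study's own code): Research Priority Score (RPS)
# for one research question. Responses per judging criterion are coded 1-5;
# None marks "insufficiently informed" / no response and is excluded.

def criterion_priority_score(responses):
    scored = [r for r in responses if r is not None]
    return 100 * sum(scored) / (5 * len(scored))  # percent of maximum available points

def research_priority_score(responses_by_criterion):
    # responses_by_criterion maps each of the four criteria
    # (relevance, feasibility, originality, applicability) to a list of scores
    scores = [criterion_priority_score(r) for r in responses_by_criterion.values()]
    return sum(scores) / len(scores)  # mean across the four judging criteria

example = {
    "relevance":     [5, 4, 5, None, 4],
    "feasibility":   [4, 4, 3, 5, 4],
    "originality":   [3, 4, None, 4, 2],
    "applicability": [5, 5, 4, 4, 4],
}
print(round(research_priority_score(example), 2))  # -> 80.75
```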
---
[Table 2 fragment: Research Type - Basic epidemiological and social science research which aims to define the incidence or prevalence of abuse, exploitation, or violence against children or identify the underlying risk factors associated with violations against children; Abbreviation - EPI; Number of Items - 22; Sub-Theme #1 - Measuring the incidence or prevalence of child protection concerns in humanitarian settings]
In addition, the Average Expert Agreement (AEA) was calculated for each research question. In order to obtain AEA values, we consolidated "strongly agree" and "agree" as well as "strongly disagree" and "disagree". For each judging criterion, the number of modal responses was then divided by the total number of scorers for that question, again excluding those who did not answer the question or who reported they were insufficiently informed on the research question being assessed. Following this calculation, the ratios were then summed and divided by the number of judging criteria.
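The AEA arithmetic can likewise be sketched as below; again this is an illustrative sketch only, with invented responses, not the computation performed in the study.

```python
# Illustrative sketch (not the study's own code): Average Expert Agreement (AEA)
# for one research question. "Strongly agree"/"agree" are consolidated, as are
# "strongly disagree"/"disagree"; None marks insufficiently informed / no response.
from collections import Counter

def consolidate(response):
    return {5: "agree", 4: "agree", 3: "undecided", 2: "disagree", 1: "disagree"}[response]

def criterion_agreement(responses):
    scored = [consolidate(r) for r in responses if r is not None]
    modal_count = Counter(scored).most_common(1)[0][1]
    return modal_count / len(scored)

def average_expert_agreement(responses_by_criterion):
    ratios = [criterion_agreement(r) for r in responses_by_criterion.values()]
    return 100 * sum(ratios) / len(ratios)  # averaged over the judging criteria

example = {
    "relevance":     [5, 4, 5, None, 4],
    "feasibility":   [4, 4, 3, 5, 4],
    "originality":   [3, 4, None, 4, 2],
    "applicability": [5, 5, 4, 4, 4],
}
print(round(average_expert_agreement(example), 1))  # -> 82.5
```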
Both RPS and AEA were calculated for the entire group of respondents as well as for subgroups, in order to analyze differences in priorities for those located in field settings as compared to those based in non-operational settings. Data were analyzed using Microsoft Excel.
---
Ethics statement
Formal ethics review is usually not requested for undertaking CHNRI exercises [13][14][15][16][17][18][19][20] as the exercise does not involve personal or otherwise sensitive data. Participants were solicited via established professional networks whose purpose is to facilitate and enable information-sharing. Prior to participation in initial Skype interviews, all participants were informed on the nature of the research and the anonymity of their feedback.
---
Results
Of the 49 respondents invited to take part in the online ranking, 41 experts participated, yielding a response rate of 83.7 percent. Research questions from all three of the research domains (epidemiological research; policy and systems research; and intervention research) featured in the top 15 research priorities. Intervention research was the predominant domain, with 8 of the top 15 priorities identified falling within this realm. Policy and systems research followed with 5 priorities, and epidemiologic research with only 2 featured priorities ranking in the top 15 (Table 3). The range of overall RPS was 63.28 to 86.33, with the highest ranked priority being the rigorous evaluation of the effectiveness of cash-based social safety nets to improve child wellbeing. Within the top 15 priorities, RPS ranged from 80.70 to 86.33. Intervention research that aims to rigorously evaluate the effectiveness of standard child protection activities provided in humanitarian settings ranked highly. Two questions concerning child labor, specifically estimating the prevalence and understanding the effectiveness of interventions to reduce the practice, ranked in the top ten priorities. Respondents also prioritized research efforts to understand how best to mobilize local systems, including the local social service workforce and para-social work models, in order to sustain child protection gains after international actors have departed a crisis.
AEA scores ranged from 41.55 to 85.63, representing the percentage of respondents who provided the same score on a research priority (averaged across four judging criteria). For the top 15 research investment options, AEA ranged from 69.04 (to build the capacity of child protection sector staff in empirical research design and data analysis planning) to 85.75 (to evaluate the effectiveness of interventions to reduce child labor) (Table 3). We found higher levels of respondent agreement among research questions with higher RPS rankings, demonstrating that a certain level of consensus was attained in order for research topics to be prioritized in the higher ranks (Fig 2).
Standard deviations (SD) were also analyzed in order to assess variation between the judging criteria. Among the top 15 research priorities, SDs ranged between 2.5 and 5.4, with the exception of the evaluation of psychosocial programming, which had an SD of 8.2 due to the comparatively lower score provided on Originality. This is likely due to the recent work on this particular topic that has been widely circulated [30] and therefore was deemed less original in the ranking process.
When comparing all RPS scores among respondents who resided within an operational setting versus those who did not, there was a correlation coefficient of 0.32, indicating a weak but positive association. The top ten research priorities differed between the two groups (Table 4). With the exception of rigorously evaluating family strengthening programs, which ranked highly for both groups of respondents, there were no other priorities that jointly ranked among the top ten. For field-based respondents, the most important initiative was to identify best practices for bridging humanitarian and development initiatives for child protection system strengthening. Field-based respondents tended towards the identification of best practices while also prioritizing capacity building for child protection sector staff in empirical research design and data analysis planning. In contrast, respondents who were not based in operational settings showed greater enthusiasm for the rigorous evaluation of interventions, with an examination of the effects of cash-based social safety nets on child well-being outcomes ranking highest.
---
Discussion
The limitations to rigorous research on child protection in humanitarian crises are notable, with harsh operational conditions, short project cycles, and inadequate funding all considered hindrances to scientific inquiry on child protection within these contexts [31][32]. However, recent efforts have begun to demonstrate that robust social science methodologies within the sector are both needed and possible [33][34][35]. This prioritization exercise, which is among the first known systematic inquiries on research investments for child protection in humanitarian contexts using the CHNRI methodology, offers initial insight on the research interests and evidence needs of sector experts.
Intervention research comprised three of the top four research priorities, aligning with many previous CHNRI studies that have similarly found intervention research to be of importance to stakeholders [36]. As previously noted, there is a dearth of rigorous evaluation to determine the effectiveness of common child protection interventions in humanitarian settings. The lack of quantitative data to document intervention effectiveness inhibits the ability of humanitarian actors to design evidence-based programs, a hindrance increasingly problematic for funding appeals and policy advocacy. This prioritization suggests that understanding intervention effectiveness is of particular interest to the sector, ranging from examinations of family-strengthening to capacity-building interventions to activities aimed at reducing child labor. Because the sample more heavily represents individuals in technical advisory and other operational capacities, the interest in intervention research most visibly highlights the needs of practitioners to have their programming rigorously tested and evaluated with respect to child well-being outcomes.
As the top priority among both intervention research topics and the entire ranking exercise, understanding the effects of cash-based social safety nets on child well-being outcomes has emerged as highly important for the sector. Cash transfers have gained prominence as multiple studies have found them effective in improving the welfare of children, including through improved health and nutrition outcomes as well as increased educational attainment [37][38][39].
The assumption driving the proliferation of cash-based social safety net interventions in humanitarian contexts is that they are an effective way of mitigating crisis-induced economic shocks, thereby preventing the use of coping strategies that may have negative effects on children such as school drop-out, child labor, and family separation. Yet, these assumptions have not been fully tested within disaster, conflict-affected, or displacement contexts, environments where children face unique risks and vulnerabilities. Further, the majority of existing evidence on the effects of cash transfers does not examine child protection outcomes such as reductions in violence, abuse, and exploitation, information of great interest to sector experts. In addition to understanding the effectiveness of singular child protection interventions on child well-being outcomes, experts indicated a need to also evaluate multi-sectoral interventions, considering this one of the highest priorities for research. A relatively broad mandate, this methodological research priority underscores the need for study designs that allow for the rigorous evaluation of multiple components within increasingly complex program designs, including analyses on how various components interact with one another. Such research endeavors are inherently more complicated, yet recent guidance from the global health sector has shown this to be a priority that spans disciplines within development and humanitarian assistance [40][41][42].
Similarly, as multi-sectoral and interdisciplinary interventions are prioritized by funders, experts within this study have identified a need to quantitatively demonstrate the added value of child protection interventions when mainstreamed within other sectors, such as health, nutrition, or education. Prior research on the effects of nutrition supplementation and play/stimulation on stunted children in Jamaica provides an example of how social scientists have captured the additive effects of non-sector related interventions [43]. If protection interventions are found to be effective in improving non-protection related outcomes for children, this type of evidence would support an argument that child protection considerations and/or program components are necessary to achieve desired results in other areas of humanitarian relief.
Child labor in humanitarian settings was also a common theme with both intervention effectiveness and prevalence data among the top 10 priorities for research investment. Similar to cash transfers, child labor has been examined across multiple development settings [44][45][46], however, data from humanitarian contexts is extremely sparse and generally limited to anecdotal information. As urban environments have become a more common setting for humanitarian crises, there is an increased risk that children will be used for begging, street vending, and other forms of exploitation [47][48]. There is a need to understand the prevalence, dynamics, and effective interventions to reduce this protection risk for children who have been displaced as well as children from affected host communities.
In order for child protection programming to be more responsive to current humanitarian contexts, experts felt that there was value in 1) better understanding the protection risks of children with disabilities (particularly non-observable disabilities) and 2) translating any existing evidence on implementing humanitarian programs in urban settings into more tangible guidance for CP practitioners. Disability inclusion has gained traction as a critical component within humanitarian assistance, however, experts noted this work to primarily address physical disabilities where programmatic accommodations are often tangible and straightforward, such as the fitting and distribution of assistive devices. In contrast, many experts noted feeling ill-equipped to properly serve children with cognitive and intellectual disabilities, agreeing that an examination of the protection risks for children with disabilities, particularly non-observable disabilities, should be prioritized.
Similarly, experts felt more guidance on child protection programming in urban humanitarian crises would be beneficial. Indeed, as rapid urbanization has resulted in more densely populated cities and towns, the potential impacts of a humanitarian crisis increase, particularly in areas with weak infrastructure and insufficient governance [49]. The Syrian refugee crisis has seen over 5 million people flee to neighboring countries, seeking refuge predominantly in the cities and towns of Lebanon and Jordan, with another 6 million internally displaced within Syria, again primarily in urban and peri-urban settings [50]. This trend differs from past decades of humanitarian assistance that was largely provided within camp-based settings, requiring a new framework for understanding how best to support children in crisis. Other actors within humanitarian response have begun to give this issue greater attention in the past several years [51][52], enabling the identified priority of secondary literature review and, as relevant, the translation and integration of evidence into child protection strategies and program design.
Localization and sustainability were also key themes. Within the top 15 research priorities, experts conveyed a need to identify best practices for both engaging the local social service workforce in emergency settings and establishing sustainable para-social work models such that structures will exist past the duration of humanitarian intervention. At the same time, respondents would like to understand best practices for bridging humanitarian and development initiatives for child protection systems strengthening. Taken together, these items demonstrate a desire to understand how best to engage local social service structures (formal and informal) and connect the work done during a crisis to a longer-term development agenda.
When scrutinizing the findings further, three trends emerged. First, among the top 15 research priorities, participants routinely scored research questions much higher for relevance than originality. It is speculated that this score variation may be a result of recent efforts by the sector to discuss and advocate for a more robust evidence base in humanitarian contexts [53][54][55]. The relatively frequent discussion about these evidence needs may have made a number of research questions appear unoriginal to participants yet still highly relevant because the research had yet to be carried out. This finding highlights the readiness of child protection experts to move forward an actionable research agenda for humanitarian settings.
Next, there were notable differences in the priorities of field and non-field based staff with only one research topic ranking within the top ten for both sub-groups (rigorously evaluate the effectiveness of family strengthening interventions to improve child well-being). As compared to non-field based respondents, those residing within an operational setting were less likely to identify rigorous evaluation within their top priorities. Instead, these respondents tended towards the identification of best practices, a logical reaction given that such research would presumably result in straight-forward guidance to program design. At the same time, field-based staff highly ranked capacity building in empirical research design and data analysis planning for the child protection sector, demonstrating a desire to build the skills required to further evidence generation.
Lastly, our study explored research topics within the professional sector of "child protection in humanitarian settings", which had a rather expansive purview. As such, some of the research priorities identified by experts were similarly broad in scope. It is our hope that as the sector progresses in the collection and translation of rigorous evidence that future priority setting exercises on child protection in humanitarian settings will be able to focus on particular needs within narrower sub-specialties.
---
Limitations
The CHNRI method is based on purposive sampling where individuals are invited to participate based on their expertise in a given field. This method relies on a non-representative sample to aggregate knowledge and experiences. The findings are therefore limited to the perceptions of a discrete group of individuals, and it is possible that additional areas for research investment may have emerged if a larger sample had been recruited. As noted earlier, however, prior quantitative work has demonstrated that collective opinion stabilizes with as few as 45-55 participants [29], although that finding was based on binary "yes" or "no" responses as opposed to the Likert scale implemented in this project. Further, given the low cost and replicability of the procedure, it is attractive to a variety of sectors as a means of fostering transparency and enhancing systematization in the creation of a research agenda.
In our study, non-field-based staff were more likely to respond to requests for interviews and, as such, had greater representation within the study (Table 1). This created a certain level of bias towards the insights and experiences of child protection experts currently based in non-operational settings. When secondarily analyzing results based on whether respondents resided in operational or non-operational settings, we did find variation in the prioritization of research items (Table 4). These findings indicate that even when saturation appears to have been reached, the rank ordering of priorities can be influenced by the characteristics of the sample.
Deviating from standard CHNRI procedure, we requested that participants rank research priorities against pre-determined criteria using a Likert scale as opposed to binary "yes" or "no" responses. This decision was informed by the lack of existing evidence within the sector of child protection in humanitarian action and the anticipation that a large majority of research items would be affirmatively ranked by respondents, making it difficult to discern which were of highest priority. While Likert scales have been used extensively in other crowdsourcing methods [56][57][58], more research is needed to examine the benefits and drawbacks of using a Likert scale within an adapted CHNRI framework.
Lastly, our study did not include "impact" as a ranking criterion. Such a criterion would have participants rank research based on the likelihood it would result in a reduction of protection risks or improved responses to child protection violations. While our research criterion of "relevance" included similar language, it did not explicitly request input on the ability of a research question, once answered, to impact the lives of children. Further research priority setting exercises on child protection may wish to include "impact" as a ranking criterion separate from "relevance" in order to further ascertain the merit of a research idea.
---
Conclusion
Rigorous, scientific research that assesses the scope of child protection risks, examines the effectiveness of existing child protection interventions, and translates evidence to practice is critical to move the sector forward and respond to donor calls for programming that is evidence-based. This CHNRI adaptation solicited inputs from a range of sector experts with variation across geographic location and job function. It is our hope that findings can guide a global research agenda, facilitating cooperation among donors, implementers, and academics to pursue a coordinated approach to evidence generation.
---
All relevant data are within the paper and its supporting information files.
---
Supporting information
S1 | 29,528 | 1,633 |
a762d3c72f6a58a93d26624c26b5b3d61d826654 | Interventions to reduce tobacco use in people experiencing homelessness. | 2,019 | [
"Review",
"JournalArticle"
] | Interventions to reduce tobacco use in people experiencing homelessness. | B A C K G R O U N D Description of the condition
Tobacco use is disproportionately concentrated among low-income populations, with rates exceeding those of the general population at least two-fold (Jamal 2015). Among low-income populations, such as people experiencing homelessness, estimated smoking prevalence ranges between 60% and 80% (Baggett 2013). Individuals with severe mental health disorders and/or substance use disorders who belong to racial/ethnic minority groups, who are older, or who self-identify as a gender and sexual minority are disproportionately represented in populations experiencing homelessness (Culhane 2013; Fazel 2014). The prevalence of mental health and substance use disorders is high among people experiencing homelessness. A systematic review concluded that the most common mental health disorders among this population were drug (range 5% to 54%) and alcohol dependence (range 8% to 58%), and that the prevalence of psychosis (range 3% to 42%) was as high as that of depression (range 0% to 59%) (Fazel 2008). These populations carry a high burden of tobacco use and tobacco-related morbidity and mortality (Schroeder 2009). Persons experiencing homelessness are three to five times more likely to die prematurely than those who are not homeless (Baggett 2015; Hwang 2009), and tobacco-related chronic diseases are the leading causes of morbidity and mortality among those aged 45 and older (Baggett 2013b). Among younger homeless-experienced adults (< 45 years), the incidence of tobacco-related chronic diseases is three times higher than the incidence in age-matched non-homeless adults (Baggett 2013b). Persons experiencing homelessness have distinctive tobacco use behaviors associated with low income, substance use comorbidities, and housing instability that affect their likelihood of successfully quitting. Epidemiological studies of tobacco use among this population have shown that most adults experiencing homelessness initiate smoking before the age of 16 (Arnsten 2004). Average daily cigarette consumption is between 10 and 13 cigarettes per day, and more than one-third smoke their first cigarette within 30 minutes of waking (Okuyemi 2006; Vijayaraghavan 2015; Vijayaraghavan 2017). People experiencing homelessness have high rates of concurrent use of alternative tobacco products such as little cigars, smokeless tobacco, and e-cigarettes (Baggett 2016; Neisler 2018). They also engage in high-risk smoking practices including exposure compensation when reducing cigarettes smoked per day and smoking cigarette butts (Garner 2013; Vijayaraghavan 2018). Smoking norms include sharing or "bumming" cigarettes, and these practices may reduce the effects of policy interventions such as increased taxes (Garner 2013; Vijayaraghavan 2018). Individuals experiencing homelessness face significant barriers to cessation, including disproportionately high rates of post-traumatic stress disorder (PTSD), which can lead to positive associations with smoking (Baggett 2016a). Smoking cessation is challenging for people who have to navigate the stressors of homelessness (Baggett 2018; Chen 2016), high levels of nicotine dependence, and limited access to smoking cessation treatment and smoke-free living environments (Vijayaraghavan 2016; Vijayaraghavan 2016b). Integrating tobacco dependence treatment into existing services for homeless-experienced adults remains challenging (Vijayaraghavan 2016b).
Staff members may not support quit attempts (Apollonio 2005; Garner 2013), and homeless-experienced adults do not have consistent access to services or information technologies used to improve access to cessation interventions (McInnes 2013). Despite these challenges, over 40% of adults experiencing homelessness report making a quit attempt in the past year (Baggett 2013c; Connor 2002). A majority relapse to smoking, with estimates of the quit ratio (i.e. the ratio of former-to-ever smokers) between 9% and 13% compared to 50% in the general population (Baggett 2013c; Vijayaraghavan 2016). Homeless populations have been historically neglected in population-wide tobacco control efforts; however, there has been increasing interest in studying the correlates of tobacco use and cessation behaviors for these populations and in discovering how these individuals may differ from the general population (Goldade 2011; Okuyemi 2013). Typically high levels of nicotine dependence among adults experiencing homelessness are associated with low likelihood of quitting (Vijayaraghavan 2014). Proximity to a shelter during the week after a quit attempt has been associated with higher risk of relapse, thought to occur because of increased exposure to environmental cues to smoking (Businelle 2014; Reitzel 2011). In contrast, staying in a shelter, as opposed to on the street, has been associated with quitting smoking (Vijayaraghavan 2016), possibly due to exposure to shelter-based smoke-free policies. Stud
---
Description of the intervention
Interventions designed to support people to stop smoking can work to motivate people to attempt to stop smoking ("cessation induction"), or to support people who have already decided to stop to achieve abstinence ("aid to cessation"). In this review, we will include both types of interventions. Many people who are homeless face barriers to using regular services, such as healthcare services, through which cessation support is available. The availability of support to assist a quit attempt can itself create motivation to quit (Aveyard 2012). Thus one possible intervention to support people experiencing homelessness is to provide bespoke cessation services that can operate both to make quitting seem more desirable and to provide treatment for those who are attempting to stop smoking. The combination of behavioral counseling and pharmacotherapy (nicotine replacement therapy [NRT], bupropion, or varenicline) is the gold standard for individually tailored smoking cessation treatment in the general population (Stead 2016). However, a vast majority of quit attempts made by people experiencing homelessness are unassisted (Vijayaraghavan 2016). Preference for cessation aids may vary by cigarette consumption, with light smokers (0 to 10 cigarettes per day) preferring counseling over medication, in contrast to moderate/heavy smokers (> 10 cigarettes per day) (Nguyen 2015).
---
How the intervention might work
Cessation induction interventions directed at smokers who are not ready to quit rely on pharmacological, behavioral, or combination interventions to increase motivation and intention to quit, with an eventual goal of abstinence. Interventions may include nicotine therapy sampling to induce practice quit attempts, as described in Carpenter 2011, or motivational interviewing to induce cessation-related behaviors among smokers who are not motivated to quit, as examined in Catley 2016. Tobacco dependence treatment can provide motivation and support for change through pharmacotherapy (Cahill 2013), counseling (Lancaster 2017), financial incentives (Notley 2019), or a combination of these (Stead 2016). Pharmacotherapy can reduce the urge to smoke and can decrease nicotine withdrawal symptoms via NRT, varenicline, or bupropion (Cahill 2013); counseling can provide support and motivation to make and continue with quit attempts (Lancaster 2017). For individuals with severe tobacco dependence, such as people experiencing homelessness, multi-component interventions that include behavioral counseling, combination pharmacotherapy, and other adjunctive methods such as financial incentives -as discussed in Businelle 2014b, Baggett 2017, and Rash 2018 -or mobile support -as offered in Carpenter 2015 -may be beneficial. However, as many quit attempts are unassisted, more may need to be done to remove barriers and facilitate access to cessation support for smokers who are homeless.
---
Why it is important to do this review
People experiencing homelessness have unique tobacco use characteristics, including higher likelihood of irregular smoking patterns, reduced exposure to clean indoor air policies, and reliance on "used" cigarettes (Baggett 2016; Garner 2013; Vijayaraghavan 2018). They receive limited support for cessation from service providers (Apollonio 2005; Garner 2013). Many countries have identified homeless-experienced adults as a high-risk group in need of targeted interventions (Fazel 2014). Tobacco use is the single most preventable cause of mortality among adults experiencing homelessness (Baggett 2015). Past efforts to promote tobacco cessation among this population have yielded mixed results that make it difficult to assess which types of tobacco dependence treatments promote abstinence. Our findings will synthesize evidence to date and will identify interventions that increase quit attempts and abstinence, as well as improve access to treatment, for this vulnerable population. We will also explore whether cessation interventions affect mental health or substance use outcomes among this population.
---
O B J E C T I V E S
To assess whether interventions designed to improve access to smoking cessation interventions for adults experiencing homelessness and interventions designed to help adults experiencing homelessness to quit smoking lead to increased engagement and tobacco abstinence. To also assess whether smoking cessation interventions for adults experiencing homelessness affect substance use and mental health.
---
M E T H O D S Criteria for considering studies for this review
---
Types of studies
We will include randomized controlled trials (RCTs) and cluster RCTs, with no exclusions based on language of publication or publication status.
---
Types of participants
Participants will include homeless and unstably housed adults (> 18 years of age). This will be defined by criteria specified by individual studies; however we envisage that participants will meet one or more of the following criteria for homelessness (ANHD 2018; Council to Homeless Persons 2018; Fazel 2014).
1. Individuals and families who do not have a fixed, regular, and adequate night-time residence, including individuals who live in emergency shelters for homeless individuals and families, and those who live in places not meant for human habitation.
2. Individuals and families who will imminently lose their main night-time residence.
3. Unaccompanied young adults and families with children and young people who meet other definitions of homelessness.
4. Individuals and families who are fleeing or attempting to flee domestic violence, dating violence, sexual assault, stalking, or other dangerous or life-threatening conditions that relate to violence against an individual or family member.
5. Individuals and families who live in transitional shelters or housing programs.
6. Individuals and families who are temporarily living with family or friends.
7. Individuals and families who are living in overcrowded conditions. Participants must also be tobacco users who may or may not be motivated to quit.
---
Types of interventions
We will include in our review any interventions that:
1. focus on increasing motivation to quit, building capacity (e.g. providing education or training to provide cessation support to staff working with people who are homeless), or improving access to tobacco cessation services in clinical and non-clinical settings for homeless adults;
2. aim to help people making a quit attempt to achieve abstinence, including but not limited to behavioral support, tobacco cessation pharmacotherapies, contingency management, and app-based interventions; or
3. focus on transitions to long-term nicotine use that do not involve combustible tobacco.
Control groups may receive no intervention or 'usual care', as defined by individual studies.
---
Types of outcome measures
---
Primary outcomes
1. Tobacco abstinence (given the paucity of data on long-term cessation outcomes among people experiencing homelessness, we will also assess short-term cessation outcomes), assessed at three time points:
i) Short-term abstinence: < three months after quit day
ii) Medium-term abstinence: ≥ three months and < six months after quit day
iii) Long-term abstinence: ≥ six months after quit day
We will conduct separate analyses for each time point. We will use the strictest definition of abstinence used by the study, with preference for continuous or prolonged (allowing a grace period for slips) abstinence over point prevalence abstinence. When possible, we will extract biochemically verified rates (e.g. breath carbon monoxide, urinary/saliva cotinine) over self-report. We will assess abstinence on an intention-to-treat basis, using the number of people randomized as the denominator.
---
Secondary outcomes
1. Number of participants receiving treatment
2. Number of people making at least one quit attempt as defined by included studies
3. Abstinence from alcohol and other drugs as defined by self-reported drug use or through biochemical validation (or both), at the longest follow-up period reported in the study
4. Point prevalence or continuous estimates (e.g. questionnaire scores) for mental illnesses (including major depressive disorder, generalized anxiety disorder, post-traumatic stress disorder, schizophrenia, and bipolar disorder) as defined by previously validated survey instruments or physician diagnosis
---
Search methods for identification of studies
---
Electronic searches
We will search the Cochrane Tobacco Addiction Group Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL), and MEDLINE. The MEDLINE search strategy is provided in Appendix 1. The Specialized Register includes reports of tobacco-related trials identified through research databases, including MEDLINE, Embase, and PsycINFO, as well as via trial registries and handsearching of journals and conference abstracts. For a detailed account of searches carried out to populate the Register, see the Cochrane Tobacco Addiction Group's website.
---
Searching other resources
We will search grey literature, including conference abstracts from the Society for Research on Nicotine and Tobacco. We will contact investigators in the field about potentially unpublished studies. We will additionally search for registered unpublished trials through the National Institutes of Health clinical trials registry (www.clinicaltrials.gov) and the World Health Organization International Clinical Trials Registry Platform Search Portal (http://apps.who.int/trialsearch/).
---
Data collection and analysis
---
Selection of studies
We will merge search results using reference management software and will remove duplicate records. Two independent review authors (MV and HS) will examine the titles and abstracts to identify relevant articles and will subsequently retrieve and examine the full-text articles to assess adherence with the eligibility criteria. A third review author (DA) will independently assess whether the full-text articles meet eligibility criteria. We will exclude all studies that do not meet inclusion criteria in terms of study design, population, or interventions. We will resolve disagreements by discussion, and when necessary, the third review author will arbitrate the case.
---
Data extraction and management
Two review authors (MV and HS) will independently extract data in duplicate. We will contact study authors to obtain missing outcome data. Once outcome data have been extracted, one of the review authors (MV) will enter them into Review Manager 5.3, and another (HS) will check them (Higgins 2011). All review authors (MV, HS, and DA) will extract information from each study for risk of bias assessments. We will extract the following information from study reports using a template developed by DA and modified by MV.
1. Source, including study ID, report ID, reviewer ID, citation, contact details, and country.
2. Methods, including study design, study objectives, study site, study duration, blinding, and sequence generation.
3. Participant characteristics, including total number enrolled and number in each group, setting, eligibility criteria, age, sex, race/ethnicity, sociodemographics, tobacco use (type, dependence level, amount used), mental illness, substance use, other comorbidities, and current residence (unsheltered, sheltered, single room occupancy hotel or temporary residence, or supportive housing).
4. Interventions, including total number of intervention groups and comparisons of interest, specific intervention, intervention details, and integrity of the intervention.
5. Outcomes, including definition, unit of measurement, and time points collected and reported.
6. Results, including participants lost to follow-up, summary data for each group, and subgroup analyses.
7. Miscellaneous items, including study author conflicts of interest, funding sources, and correspondence with study authors.
---
Assessment of risk of bias in included studies
Two review authors will assess the risk of bias for each included study, as outlined in the Cochrane Handbook for Systematic Reviews of Interventions, Chapter 8 (Higgins 2011). Using a risk of bias table, we will categorize risk of bias as "low risk," "high risk," or "unclear risk" for each domain, with the last category indicating insufficient information to judge risk of bias. We will assess the following domains: selection bias (including sequence generation and allocation concealment), blinding (performance bias and detection bias), attrition bias (incomplete outcome data), and any other bias. According to guidance from the Cochrane Tobacco Addiction Group, we will assess performance bias only for studies of pharmacotherapies, as it is impossible to blind behavioral interventions.
---
Measures of treatment effect
When possible, we will report a risk ratio (RR) and 95% confidence intervals (CIs) for the primary outcome (i.e. abstinence) for each included study. The risk ratio is defined as (number of participants in the intervention group who achieve abstinence/ total number of people randomized to the intervention group)/ (number of participants in the control group who achieve abstinence/total number of people randomized to the control group). We will use an intention-to-treat analysis, in which participants are analyzed based on the intervention to which they were randomized, irrespective of the intervention they actually received. For dichotomous secondary outcomes, such as number of people making a quit attempt and abstinence from substance use, we will calculate an RR with 95% CI for each study. For any continuous measures of our mental illness secondary outcome, we will calculate the mean difference (MD) or the standardized mean difference (SMD), as appropriate for each study.
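As a worked illustration of this definition (a sketch added for clarity, not part of the protocol; the event counts are invented), the intention-to-treat risk ratio and a 95% confidence interval on the log scale could be computed as follows.

```python
# Illustrative sketch: risk ratio (RR) with a 95% CI using the standard
# log-scale standard error. Counts below are invented for illustration.
import math

def risk_ratio_ci(events_int, n_int, events_ctrl, n_ctrl, z=1.96):
    rr = (events_int / n_int) / (events_ctrl / n_ctrl)
    se = math.sqrt(1/events_int - 1/n_int + 1/events_ctrl - 1/n_ctrl)  # SE of log(RR)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# e.g. 18/120 abstinent with the intervention vs 9/118 in the control arm
print(risk_ratio_ci(18, 120, 9, 118))
```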
---
Unit of analysis issues
The unit of analysis will be the individual. For cluster-randomized trials, we will assess whether study authors have adjusted for this clustering, and whether this had an impact on the overall result. When clustering appears to have had little impact on the results, we will use unadjusted quit rate data; however when clustering does appear to have an impact on results, we will adjust for this using the intraclass correlation (ICC).
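The form of the adjustment is not spelled out above; one common approach, offered here only as an illustrative assumption rather than the protocol's specified method, is to deflate the effective sample size by the design effect, 1 + (m - 1) x ICC, where m is the average cluster size.

```python
# Illustrative sketch (an assumption, not the protocol's stated method):
# adjusting a cluster-randomized trial's sample size by the design effect.

def effective_sample_size(n_participants, avg_cluster_size, icc):
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return n_participants / design_effect

# e.g. 400 participants in shelters of ~20 residents with an ICC of 0.05
print(round(effective_sample_size(400, 20, 0.05)))  # -> 205
```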
---
Dealing with missing data
When outcome data are missing, we will attempt to contact the study authors to request missing data. For all outcomes apart from mental health, we will assume that participants who are lost to follow-up are continuing smokers, are still using other substances, did not make a quit attempt, or did not receive treatment. We will report deaths separately and will not include participants who have died during the analysis. For the mental health outcome, we will conduct a complete case analysis.
---
Assessment of heterogeneity
We will classify heterogeneity as clinical, methodological, or statistical (Higgins 2011). We will not attempt a meta-analysis if we observe significant clinical or methodological heterogeneity between studies; we will instead report results in a narrative summary. If we feel it is appropriate to carry out meta-analyses, we will assess statistical heterogeneity using the I² statistic, which represents the percentage of the effect that is attributable to heterogeneity versus chance alone (Chapter 9; Higgins 2011). We will consider an I² value greater than 50% as evidence of substantial heterogeneity.
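For reference, the I² statistic can be derived from Cochran's Q; the sketch below is illustrative only, using invented log risk ratios and variances rather than data from any included study.

```python
# Illustrative sketch: I-squared from Cochran's Q, given study effect estimates
# (e.g. log risk ratios) and their variances. Inputs are invented for illustration.

def i_squared(effects, variances):
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    return max(0.0, 100 * (q - df) / q) if q > 0 else 0.0

print(round(i_squared([0.9, 0.1, -0.3], [0.04, 0.05, 0.06]), 1))  # -> ~87, substantial
```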
---
Assessment of reporting biases
We will assess several forms of reporting bias including outcome reporting bias (selective reporting of outcomes), location bias (publication of research in journals that may have different levels of access such as open access publication), and publication bias (publication or non-publication of studies depending on the direction of outcome effects), and we will discuss these in our review. We will assess whether abstinence from tobacco, our primary outcome, was reported in all included studies, and will report which studies included this outcome and which did not. If we include more than 10 studies in any analyses, we will generate a funnel plot to help us assess whether there could be publication bias.
---
Data synthesis
When meta-analysis is appropriate, we will use the Mantel-Haenszel random-effects method to calculate pooled, summary, weighted risk ratios (95% CIs), or inverse-variance random-effects methods to calculate pooled, summary, weighted MDs (95% CIs) or SMDs (95% CIs). We will pool separately studies testing interventions that aim to improve access to smoking cessation interventions and studies that are simply testing the effectiveness of smoking cessation interventions among people experiencing homelessness. Should meta-analyses not be possible, we will provide a narrative assessment of the evidence.
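As an illustration of the pooling step, the sketch below uses a simplified inverse-variance DerSimonian-Laird calculation with invented inputs; it approximates, but is not, the exact Mantel-Haenszel random-effects routine implemented in Review Manager.

```python
# Illustrative sketch (a simplification, not Review Manager's exact routine):
# DerSimonian-Laird random-effects pooling of log risk ratios.
import math

def random_effects_pool(log_rrs, variances):
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, log_rrs))
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_star = [1 / (v + tau2) for v in variances]       # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, log_rrs)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return math.exp(pooled), math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)

# Invented per-study log risk ratios and variances
print(random_effects_pool([0.9, 0.1, -0.3], [0.04, 0.05, 0.06]))
```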
---
Subgroup analysis and investigation of heterogeneity
When possible, we will conduct subgroup analyses to examine whether outcomes differ based on:
1. intensity of treatment (e.g. number of counselling sessions);
2. participants' residential history (sheltered vs unsheltered);
3. participants' substance use history;
4. participants' diagnosis of mental health disorder; and
5. participants' use of non-cigarette tobacco and nicotine products.
---
Sensitivity analysis
We will conduct sensitivity analyses by excluding studies with high risk of bias (judged to be at high risk for one or more of the domains assessed).
---
Summary of findings
We will produce a "Summary of findings" table (Higgins 2011), presenting the primary outcome (tobacco use abstinence at all time points), absolute and relative magnitude of effects, numbers of participants, and numbers of studies contributing to these outcomes. Two independent review authors will also carry out GRADE assessments of the certainty of evidence. Using GRADE criteria (study limitations, consistency of effect, imprecision, indirectness, and publication bias), we will grade the quality of evidence as very low, low, moderate, or high, and will provide footnotes to explain reasons for downgrading of evidence.
---
A C K N O W L E D G E M E N T S
The review authors would like to thank Drs. Nicola Lindson, Paul Aveyard, and Jonathan Livingstone-Banks for their thoughtful review of draft versions of the protocol.
---
R E F E R E N C E S
---
Additional references ANHD 2018
Association for Neighborhood and Housing Development.
---
C O N T R I B U T I O N S O F A U T H O R S
The protocol was conceived and prepared by Maya Vijayaraghavan, Holly Elser, and Dorie Apollonio.
---
D E C L A R A T I O N S O F I N T E R E S T
Maya Vijayaraghavan has no conflicts of interest to report. MV has one pending grant application on the topic of smoke-free policies in permanent supportive housing for formerly homeless populations.
Holly Elser has no conflicts of interest to report. Dorie Apollonio has no conflicts of interest to report.
---
S O U R C E S O F S U P P O R T Internal sources
• University of California, San Francisco, San Francisco Cancer Initiative, USA.
---
External sources
• Tobacco Related Disease Research Program, USA. Grant | 24,163 | 72
6e1b3a67309181ee5532744cc1dcdb77706e024e | Bear in a Window: collecting Australian children’s stories of the COVID-19 pandemic | 2,024 | [
"JournalArticle"
] | The Bear in a Window project captures Australian children's experiences of the COVID-19 pandemic. We focused on children's experiences of lockdown, or extended periods of home confinement, ranging from one to 100 days at a time between 2020 and 2021. Using the online experimental platform, Gorilla, we invited children aged 3-12 to record themselves telling stories about the positives and negatives of life in lockdown to our mascot, Covey Bear. Recordings were saved on the Gorilla server and orthographically and automatically transcribed using Sonix, with manual correction. Preliminary analyses of 18 children's recordings illustrate several emergent topics, reflecting children's experiences of the pandemic in the areas of health and wellbeing; education and online learning; digital engagement; family and friends; relationships; and mealtimes and food. We found that in their storytelling, children engaged in a wide variety of discourse strategies to hold the floor, indicate focus, and transition to different topics. The project will contribute to a national public collection of Australian children's COVID-19 stories and create a digital repository of Australian children's talk that will be available to researchers across different disciplines. | Introduction
Throughout Australia's early waves of the COVID-19 pandemic from March 2020 onwards, a strict lockdown was in place across many cities and towns across the country. A strict lockdown included extended periods of home confinement and the closure of all non-essential workplaces, schools, and retail and entertainment venues. Individuals and families in lockdown were only permitted to go outside for an hour of exercise daily and were required to stay within five kilometres of their home. Some residents of the city of Melbourne put teddy bears in their windows facing towards the street (Figure 1) to attract the attention of people out walking through their rather deserted neighbourhoods. Families with children started to make a game of spotting the bears on their daily walks. This provided the inspiration for the name of our project, Bear in a Window, where we collected Australian children's stories and experiences of the COVID-19 pandemic.
This topic is particularly pertinent given that the city of Melbourne set the world record for the number of days spent in lockdown in 2021 (Boaz 2021). The Oxford Blavatnik School of Government rated government restrictions in the pandemic on a scale of 0-100, with a higher number indicating more stringent restrictions (Hale et al. 2021). At 71.76, Australia's score (retrieved on 20 September 2021) was the highest of all OECD countries, with the United States at 61.57 and the United Kingdom at 35.64.
The motivation for our project stems from the fact that children's voices and narratives are often absent from discourse and historical experiential reports of major world events. For example, while there are reports of adult recollections of being a child and living through the Spanish flu pandemic of 1918 (e.g. James 2019), there are very few archives with actual children's reports of their experiences. We know from previous work in Christchurch, New Zealand, where a "QuakeBox" was set up to allow people to share their stories, that giving people voice after major events can have a therapeutic effect (Clark et al. 2016; see also Carmichael et al. 2022). Similar outcomes have emerged from the HONOR project, which is a corpus of interviews on the topic of Hurricane Harvey (Englebretson et al. 2020). Neither the QuakeBox nor the HONOR project include recordings of children. However, the MI Diaries project (Sneller et al. 2022) and Lothian Diary Project (Hall-Lew et al. 2022) invite adults and children to recount their experiences of the COVID-19 pandemic. These projects indicate a relatively recent interest in including the voices of children in the collective memory of major world events, and in (socio)linguistic research more generally. Our project will contribute to this growing area of research and provide a snapshot of a unique event in Australian (and world) history, with data being accessible to researchers and the general public into the future.
The Bear in a Window project provided children with an opportunity to give voice to a range of topics that had importance for them in light of having to stay at home in a state of restricted movement across space and time, without being filtered through the lens of an adult's perspective. In this paper, following a discussion of experiment design and method (Section 2), we present the topics children raised and a linguistic analysis of how children talked about them (Section 3). We explore not just what children say, but how they say it, by examining the discourse structures and features that help to situate or contextualize their perspectives. In Section 4, we reflect on the pros and cons of running this kind of unsupervised data collection entirely online and discuss future steps for the project. In line with the theme of this special issue, we stress that the focus of our paper is on highlighting the procedure and method for online (remote) data collection in a specific context: in this case, the COVID-19 pandemic. Our data set is relatively small (18 speakers), and our analysis is exploratory for now, with a focus on what we have learnt throughout the process (see in particular Section 4).
---
Method and materials
---
Data collection methods
Our project was a fully online, COVID-safe task which we designed and hosted on Gorilla (https://gorilla.sc/), a platform which is commonly used in the behavioural sciences, and which has a user-friendly interface. The learning curve is not too steep, rendering the process of designing online experiments fairly intuitive. The payment structure is affordable, and we opted for the "pay as you go" option, which cost us just over one Australian dollar per completed respondent, only deducted when a respondent fully completed the task. The link to our experiment was available through our project website. We advertised the study via posters and flyers, on social media, and in one TV interview. Recruitment was targeted at parents/guardians of children aged 3-12 years. From the website, parents or guardians could read about the project and project team, and then click on a link to take part, which redirected them to Gorilla. There they were informed that their child needed to complete the task on a tablet, desktop, or laptop. At this stage, the parent or guardian read a plain language statement and gave informed consent. They then completed some demographic questions (age and gender of child, ethnic/cultural background, language(s) spoken at home, and postcode). Following this, and in line with our research aims to capture children's voices and experiences, the parent or guardian was instructed to pass the device to the child, so they could complete the task with minimal prompting from an adult. They were shown the following text on-screen: "Thanks for your help. Now it's your child's turn! When they're in front of the device and ready to go, click Next!" Since this was unsupervised research, the following instructions were shown to parents (in addition to the image in Figure 2):
We want to record your child's stories clearly! Please make sure your child: -Is in a quiet location, with not too much background sound -Stays close to the tablet/laptop device -Doesn't move around too much Before the children commenced recording, they were prompted to do a sound and microphone check, by playing a sound and making sure they heard it, and then recording themselves saying "Hello, Australia!". This was then played back to them. Overall, the quality of our recordings was quite good, but we did have issues with younger siblings being present who talked over some of the recordings. We will discuss these issues further in Section 2.3.
All instructions for children from this point on were provided on-screen and via a friendly voice-over, to accommodate children not yet able to read. Children were asked to record themselves responding to each of the two questions, with an image of our mascot, Covey Bear (Figure 1) shown on-screen. They were asked to consider the following questions, which they responded to one at a time while looking at an image of Covey and an accompanying large sad face for Question 1 (Figure 3) and an accompanying large smiley face for Question 2:
(1) Can you tell Covey a story about something that was not so good about having to stay at home all the time? (2) Can you tell Covey a story about something that was good about having to stay at home all the time?
Children had 2 min to respond to each question, with a graphic timer indicating for them when their time was running out. At the end of each 2 min block, they had the opportunity to extend their comments in a new 2 min recording block.
---
Participants
Eighteen children participated, from four Australian states: Victoria, Tasmania, Western Australia, and New South Wales. Our participants (13 males, 5 females) were aged 3-12 years, and their recordings were orthographically transcribed. All participants were English speakers from a range of linguistic, cultural, and ethnic backgrounds. The average incomes across the postcodes where the participating children lived were higher than the national average.
We note that for an experiment that was live for almost a full calendar year, the total number of participants was below what we had anticipated. The analytics in Gorilla show that there were 18 full completions of the task (listed as "complete" on the Gorilla server). In addition to this, there were 43 participants who started the task and made some recordings but did not "finish" it (listed as "live"), which resulted in the recordings not being analysable since they were not uploaded to the Gorilla server. Finally, there were 64 who started the task, but exited before any recordings were made (listed as "rejected"). Our own test runs of the experiment are included in this "rejected" figure, however.
Of the 43 "live" participants, we consider this to be a high attrition rate. Since the task was designed to not take longer than 15 min to complete, we suspect that one aspect of the task that may have contributed to the attrition rate was that the Finish button is very small in Gorilla, and must be pressed by participants in order for all data to be saved. In an unsupervised task such as this one, it is likely that children or parents simply closed the browser after completing their recordings, without clicking on the Finish button. In the early stages of testing, we noticed the high proportion of "live" participants, and added an extra instruction at the end of the experiment to remind people to "click Finish". However, we only included this in the text on-screen ("Thank you for sharing your stories today! Please click 'Finish' and then you can close your browser window"; see Figure 4) and not in the voice-over, which simply said "Thank you for sharing your stories today!".
---
Overview of the data
The content of the recordings varied considerably from one child to another and, despite our original research aim to collect stories, not all children engaged in the telling of narratives. We suspect this may have been due to the way the questions were posed, inviting children to reflect on and evaluate the 'good' and 'bad'. Furthermore, despite our efforts to encourage children to complete the task independently, we noticed in the recordings that some parents (and older siblings) were audibly prompting the children (Example 1). This resulted in some children simply responding to the questions or prompts posed by their parent or sibling, rather than engaging in independent reflection.
(1) Bella (5;2): Um (.) when-when it was good in COVID-teen my mum and me went-went and had breakfast walks and had bear hunts.
Parent/guardian: You might need to explain what they are. What's a breakfast walk?
Bella: A breakfast walk is a walk when you eat breakfast like toast and crumpets and s-scones and a bear hunt is a hunt where you look in every window to see a bear.
The 2 min time limit for each recording seems to have been appropriate, as only one child opted to extend their time for a further 2 min. While we had anticipated that children might have their talk truncated, our data suggest that the 2 min limit for recordings on Gorilla is not a hindrance for experiments such as these. Furthermore, the two questions seemed to work well for the children, despite their large age range, with all children willingly engaging in the task and sharing their experiences.
---
Analysis
Each separate recording generated an audio file on the Gorilla server, which was then downloaded and run through the automatic transcriber, Sonix. In some cases there was overlapping speech, with siblings talking over one another, or interference from background noise from another sibling playing nearby, rendering the automatic transcription more challenging and the level of accuracy variable. However, the transcription was generally more reliable with the speech of older children and in the absence of overlapping speech. All automatic transcriptions were hand-corrected by two researchers and further refined for the granular discourse analysis, where details such as false starts and filled and unfilled pauses were included. Overall, the automatic transcription process did save time compared to manually transcribing everything from the start. Following transcription and hand-correction, the text files were imported into ELAN for coding. In undertaking the linguistic analysis, we had two coding tiers in our ELAN files, and broadly followed a discursive psychology (Potter 2012; Potter and Wetherell 1987) discourse analytic approach, which empirically examines the ways in which topics of experience are managed in interaction. In the first tier, we coded for topics that emerged in the semi-structured reflections in order to shed light on children's positive and negative experiences of life in lockdown. In the second ELAN tier, we coded for children's discourse strategies.
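To make the two-tier coding workflow concrete, the following minimal sketch is a hypothetical illustration only (not the project's actual tooling; the file name and column layout are assumptions): it tallies topic and discourse-strategy codes after the hand-corrected annotations have been exported from ELAN as a tab-separated file.

# Hypothetical post-ELAN tally of coded utterances (illustrative only).
# Assumes an export file "coded_utterances.tsv" with columns:
# child_id <TAB> topic <TAB> strategies (";"-separated) <TAB> utterance
from collections import Counter
import csv

topic_counts = Counter()
strategy_counts = Counter()

with open("coded_utterances.tsv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        topic_counts[row["topic"]] += 1            # tier 1: topics
        for s in row["strategies"].split(";"):     # tier 2: discourse strategies
            if s.strip():
                strategy_counts[s.strip()] += 1

print("Utterances per topic:", topic_counts.most_common())
print("Strategy frequencies:", strategy_counts.most_common())

A tabulation of this kind could feed summaries such as Table 1, whether the counting is done manually or with a script.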
---
Results
After an iterative, primarily bottom-up process of analysis, we settled on six central topics: health, education, family and friends, digital engagement, relationships, and mealtimes and food (Table 1). Apart from mealtimes and food, which was unanimously positive (although it was also the topic with the fewest mentions), the remaining five topics cut equally across both positive and negative (the "good" and the "not so good") experiences. We note that many of the utterances could have been coded under several topics, as the topics were not mutually exclusive, but for the purpose of presentation we include just one topic per utterance in Table 1.
We then focused our analysis on the children's initial responses to the question prompts and their discourse organization and management strategies, including filled pauses (Swerts 1998), 'and' as an utterance-initial topic transition marker, repair (Schegloff et al. 1977), false starts, and their use of the discourse marker 'like' for sequentially organizing and maintaining flow in their talk (D'Arcy 2017; Degand et al. 2013); see Table 2 for examples.
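As an aside, some of these features can be pre-flagged automatically to speed up manual coding. The snippet below is a hypothetical first-pass aid only; it was not part of the coding procedure described above, and the regular expressions are simplistic assumptions. It flags candidate filled pauses and hyphenated repeated-word repairs for human review.

# Hypothetical first-pass flagging of candidate disfluencies (illustrative only).
import re

FILLED_PAUSE = re.compile(r"\b(um+|uh+|er+)\b", re.IGNORECASE)
REPEAT_REPAIR = re.compile(r"\b(\w+)-\1\b", re.IGNORECASE)  # e.g. "when-when", "went-went"

def flag_disfluencies(utterance: str) -> dict:
    """Return candidate filled pauses and repeated-word repairs for manual checking."""
    return {
        "filled_pauses": FILLED_PAUSE.findall(utterance),
        "repeat_repairs": REPEAT_REPAIR.findall(utterance),
    }

print(flag_disfluencies("Um (.) when-when it was good my mum and me went-went on walks"))
# {'filled_pauses': ['Um'], 'repeat_repairs': ['when', 'went']}

Flags like these would still require manual verification, since forms such as "er" can also be lexical words and transcription conventions for repairs vary.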
Overall, in response to the "not so good" things about having to stay at home all the time, participants most frequently mentioned not seeing friends and extended family, not going to school or childcare, and being bullied by classmates during online learning. We note that although children's reflections indicated that they were aware of COVID-19 and its dangers, they did not appear to be feeling afraid or unsafe. The "good" things included getting to have cooked lunches at home, spending more time with family and siblings, going on morning walks and bear hunts in their neighbourhoods, and home exercise opportunities such as trampolining. In terms of discourse strategies, children were highly engaged in the task of narrating their experiences, and they responded well to the idea that an interlocutor was present, using floor holder and topic transition markers (e.g., and also), as they would in a normal, everyday conversation. They also used focus markers such as like to emphasize certain points in their narrative, such as going on a bush walk or to the pool (Table 2). This allowed them to voice their own, localized concerns with an imagined interlocutor, even when that interlocutor was not giving them the conversational feedback and responses they might normally expect. This finding was encouraging, as it showed the strengths of this mode of data collection in eliciting conversational data, despite no interlocutor being present.
---
Discussion and closing considerations
Bear in a Window was a unique opportunity to capture children's voices during an unprecedented time in world history. The capturing and sharing of these voices have important implications for how children's perspectives are included in our collective memory. We believe that the process itself may have had a therapeutic effect: providing children with the opportunity to weigh up both the positives and negatives of life in lockdown promoted a sense of perspective, which may in turn support health and well-being. We believe we have contributed to a body of important literature, such as Clark et al. (2016), in which people are invited to share their experiences of traumatic events.
Bear in a Window belongs to a first wave of projects run entirely online (see Sneller 2022), joining, for example, the MI Diaries project, which invites participants to share audio diaries of life in Michigan via an app, including aspects of life during the pandemic (Sneller et al. 2022), and the Lothian Diary Project, which investigated how the COVID-19 lockdown changed the lives of people in Edinburgh and the Lothians in Scotland (Hall-Lew et al. 2022). All three projects are unique in that they elicit data without a researcher, interviewer, or interlocutor present. In other words, they guide respondents to self-record data which is subsequently used by researchers. This method has the potential to be powerful for the future of linguistic data collection and for other social sciences, and initial findings in terms of audio quality and content have been promising.
We note that while one drawback of our project was the small number of participants, neither the Lothian Diary nor the MI Diaries project had this problem, with the former having recorded 195 participants and the latter over 150 diarists at the time of writing. Reasons for this may be that the Lothian Diary Project had high public visibility, including at an in-person Festival of Social Sciences (Hall-Lew et al. 2022), and the MI Diaries project had legitimacy and visibility through the availability of its app through app stores. The MI Diaries project also reported recruitment success in specific online spaces, such as Reddit and university listservs, rather than via social media more generally (Sneller et al. 2022).
Table 2. Examples of children's discourse strategies (strategy: example utterance):
Filled pause (um): "The bad thing about the lockdown was because-um-you couldn't go to school"
Topic transition marker (and): "And you couldn't see your friends at school and you had to do online classes"
Topic transition marker (and); repaired false start (Z-on Zoom; they were-they could): "And Z-on Zoom, and they were-they could sometimes be laggy, so you don't understand everyone clearly"
Topic transition marker (and also); discourse marker (like): "And also you can't go like on a bush walk or to the pool when there was a lockdown because of CO-COVID-"
Topic transition marker (and); filled pause (um): "And um it could have been there."
Repaired false start (you could-you can): "And you could-you can only leave your house for central things"
Repaired false start (wo-for work); discourse marker (like): "and to like wo-for work, or like to get tested."

In terms of method, we note that our project had its challenges. This was unsupervised research, and we observed a higher propensity not just for dropout or attrition, but also misunderstanding of instructions, and mixed quality recordings (this was also observed in the Lothian Diary Project when participants recorded themselves outdoors). Naturally, this was expected in wholly online data collection, but normally, we would expect the benefit of higher participant numbers (due to the accessibility of the task, without having to come onto, e.g., a university campus to take part in the research) to outweigh the drawbacks of attrition rates. However, in our case, we experienced both low participation rates and high attrition rates. To try to understand this trend, we garnered informal feedback from some participants, and from other colleagues who use Gorilla, and we suspect that the following deterrents may have been at play:
(i) Gorilla is not mobile-friendly. To complete the task, participants had to switch to using a desktop, laptop, or tablet. This creates an extra hurdle, as we suspect many of the parents heard about the project on social media, which is often or even exclusively accessed via people's mobile phones. The MI Diaries project (Sneller et al. 2022), for example, used a mobile app, which proved to be easily accessible for participants without compromising on sound quality (see also Freeman and De Decker 2021). Having a mobile app downloadable on app stores also gave legitimacy and authenticity to the project and its visual branding (Sneller et al. 2022).
(ii) There was no participant payment or incentive offered. Since this was a small-scale project with limited funding, we were not in a position to offer an incentive; however, this could have encouraged more people to participate. The MI Diaries (Sneller et al. 2022) and Lothian Diary (Hall-Lew et al. 2022) projects offered compensation: a USD 5 gift card per 15 min of recording for the former, and for the latter GBP 15 for each standard contribution and GBP 20 for each contribution from someone unhoused or otherwise vulnerable. It is noteworthy that both of these projects offered alternative options to simple cash payment. In the MI Diaries project, participants could also choose to "pay forward" their payment to someone else; and in the Lothian Diary Project, participants could choose a gift card to a local business or a donation to a local charity.
(iii) There was no "hard deadline" for the project, meaning that even people with intentions to participate may have simply put it off, thinking they could complete it at any time.
(iv) Participants may have believed that the experiment was finished once they had completed their second recording, and simply closed the browser, rather than clicking the important (but not so visible) Finish button. The fact that parents had to act as ad hoc research assistants, making sure their children completed the required steps, may also have been at odds with our instructions for parents to step away from the device once their children were ready to start their recordings.
In terms of future steps, we plan to collect more data for the project, perhaps in a supervised or semi-supervised fashion, and with an offer of payment or reward. Recent projects utilizing a semi-supervised protocol, where participants take part at home on their own device, but are guided through the task on Zoom by a research assistant, have had high rates of participation and completion, with participants reporting similar experiences of participation as compared to face-to-face data collection (see Leemann et al. 2020). With more data, we plan to expand our analysis of discourse strategies, focussing on topic markers, adjacency pairs, and (where applicable) narrative development. We also hope to be able to examine age and potential gender-based differences with a larger sample. The present analysis, while exploratory, has provided us with the opportunity to reflect on the benefits and drawbacks of remote data collection. We expect this kind of research to remain an option for many scholars beyond the pandemic and we envisage more papers on best practice in this area to emerge in the years to come.
As our own corpus expands, we will work towards creating a publicly accessible database and will work with two museums to curate a collection of children's voices of life in lockdown. Our findings have the potential to be of interest, and of further application, to researchers across different disciplines, including linguistics and language development, education, speech sciences and technology, psychological sciences, health and wellbeing, and language variation and change, including documenting and exploring Australian children's spoken English and examining how language change spreads within a community and across generations.
99c927385bb13e8be76c46f007c06dc6457b90cb | Universal Social Protection and Health Care as a Social Common | 2020 | ["JournalArticle"] | COVID-19 reveals the undeniable fact of our interdependence and some hard truths about our economic system. While this is nothing new, it will now be difficult for all those who preferred to ignore some basic facts to go on with business as usual. Our economy collapsed because people cannot buy more than what they actually need. Yet the economy grows the more people get sick and need help. And our universal welfare systems have never excluded so many people as they do now. The many flaws in the dominant thinking and policymaking do not only refer to our health systems, but are almost all linked to the way neoliberal globalization is organized. Turn the thinking around, forget the unfettered profit-seeking, start with the real basic needs of people, and all the so badly needed approaches logically fall into the basket: the link with social protection, with water, housing and income security, the link with participation and democracy. In this article, I want to sketch the journey from needs to commons, since that is where the road should be leading us. It goes in the opposite direction of more austerity, more privatization, more fragmentation of our social policies. It also leads to paradigmatic changes, based on old concepts such as solidarity and a new way to define sustainability. The COVID-19 crisis is revealing in many respects. All of a sudden, one does not have to convince people anymore of the importance of health care and social protection. Surprising as it may sound, for many governments and for many social movements, social protection has not been one of the priorities on their agenda. Some think the private sector will take care of it, others think they have to respect the international fiscal directives, and still others give priority to environmental policies with maybe some vague demand for basic income. If this current crisis could re-direct past thinking into a clear demand for health care and social protection, leaving aside universal basic income and privatizations, one would be able to speak of the silver lining of this coronacrisis. However, in order to do so, many traps have to be avoided. In this article I will briefly look at which side roads are better left behind, what a forward-looking policy can look like, and how it can lead to a perspective on social commons and system change. This implies an intersectional approach to health, social protection and several other sectors of social and economic policies. It is the road to the sustainability of life, people, societies and nature.
---
Contradictions
The most contradictory element in this crisis is the striking realization that the economy is collapsing because people are buying only what they really need. For those who were never able to do anything else, nothing changed. But at the same time, the economy was growing with every person who had to be taken to hospital, with every funeral that had to be organized, and with every videoconference of people unable to meet for real. It shows once again the absurdity of the blind focus on economic growth in terms of Gross Domestic Product (GDP). Should we not cheer instead of deplore the cuts in luxury consumption, and should we not grieve instead of cheer at the growth due to extra funerals?
Due to the economic downturn, emissions of pollutants including CO2 and nitrogen oxides dropped by between 10% and 30% from February to June 2020. But even if lockdown measures were to continue around the world until the end of 2021, global temperatures would only be 0.01° lower than expected by 2030 (Gohd 2020). In other words, behavioural change is not enough. On stock exchanges there were some ups and downs, but globally shareholders did not suffer. And Jeff Bezos did not have enough time to count his extra profits.
In short, while people were suffering and dying, small businesses lost their income and the dominant economic and financial systems just continued, with some slight changes at the margins. Governments put people at risk by giving priority to economic recovery, loosening confinement measures before the virus had actually disappeared. Hospitals and care workers were suffering (many health personnel died, in fact) because of a lack of protective equipment; private hospitals were selective in their admissions; and some poor countries even lacked basic hospital beds.
Once again, Naomi Klein's statement, made in another context, that the economy is at war with life was shown to be true (Klein 2014). The only conclusion, then, is that we have to turn our backs on the neoliberal globalization that frames this economic system and look for the exit. But how? The task before us is to reshape our thinking, knowing that the current system cannot solve our problems, which are matters of life, of people, of societies and of nature.
---
Other Ways of Thinking
Let us try to turn our thinking around and not start from the economy but from people's needs. These needs are the same all over the world: they are food, water, shelter, clothing, housing, health care, clean air… in our modern and urban societies we can add other public services such as education, culture, communication or collective transport. In order to meet all those needs, people rightly want protection and this protection, basically, can only be given in two ways in order to safeguard life: either with strong rules, police and the military, or with a broad range of social protection measures, with economic and social rights. If one believes in the importance of peace, the latter is the way to go. Now, there obviously are many different ways to try and guarantee that all people's needs are properly met.
Here, I want to briefly mention three ways that cannot lead to lasting and sustainable solutions. I will then point to the many interlinkages and propose the way of social commons, based on solidarity and the possible synergies between all elements of the social, economic and political systems.
---
Welfare States
The first solution is the existence of welfare states, as we have seen in several richer countries. If we look back at the way they came about, we can only be full of admiration for the social struggles they implied and the institutional arrangements they led to. Most of them have been severely damaged by the neoliberal cuts in social spending of the last decades, the privatization of health care, pension systems and other public services, and the growing delegitimization of public, collective solidarity. But again, what some countries still have, such as Scandinavia, Germany, France or my own country Belgium, looks like a miracle compared to the poor or non-existent social protection most people in the Global South have. So why not just promote this system in the rest of the world?
The main reason is that the world has changed compared to the period in which welfare states emerged. Women now participate in the labour market in large numbers, there are more and more single-parent families, there is more migration, and the economic system itself has changed seriously. The growing number of people working in the platform economy hardly have any protection. More and more companies rely on temporary workers with less protection. It is true that these welfare states have seriously hindered the emergence of new poverty, but they did not eradicate poverty, since they were focused on formal labour markets and did not touch those outside of them. The economic and social rights they provide now have to be extended and enlarged, which means universal implementation, a reform of labour markets with more rights, the transferability of rights for migrant workers, vocational training, etc.
While the basic principles of welfare states, built on solidarity and social citizenship (Marshall 1964; Castel 1995), remain valid, one has to be critical of their bureaucratization and look for better ways to shape the needed solidarity. Welfare states clearly still have to be promoted, but they need a serious re-examination.
---
'Western' Modernity and Basic Income
A second solution to discard is rather popular in some segments of the ecological movement which often puts serious question marks to 'western' modernity and wants to go in the direction of universal basic incomes. In this article I cannot go into the details of this delicate discourse. Let me just say that much has to do with its definition. Based on 'modernization theory' of development studies, implying a linear 'progress' from rural to industrial societies, from subsistence to consumerism, from feudalism to liberal democracies (Rostow 1960), one can feel sympathy for those who reject it. But based on enlightenment thinking with universal human rights, the fundamental equality of all human beings, the separation of religion and state, and maybe most of all the capacity of Kant's 'sapere aude' ('have the courage to know') and of self-criticism, the objections to modernity are more difficult to accept. All too often, anti-modernity leads to fundamentalism, as can be seen in some countries of the Middle East. And most of all, most people in the South do want some kind of modernity, from human rights and democracy to mobile phones. What has to be condemned about the 'western' modernity is that it never applied its valid principles to peoples in the South and that colonizers never allowed these people to define and shape their own modernity (Schuurman 1993). The time has certainly come to take into account the 'epistemologies of the South' (de Sousa Santos 2016).
More often than not, those critical of modernity also reject welfare-state types of solidarity, which they see as linked to reformism and productivism. They prefer a universal basic income (UBI), that is, an equal amount of money given unconditionally to all members of society. Again, not all arguments for and against this solution can be developed here (Downes and Lansley 2018). But there are serious reasons to reject it, the main one being that unequal people have to be treated unequally in order to promote equality (Sen 1992). Some have greater needs than others, and this should be taken into account. Also, giving money to people who do not need it, and who in many cases may not even pay taxes, makes this solution extremely expensive, so that it can only be pursued by drastically cutting down on public services such as health care. In fact, indirectly, by providing money to people and cutting social public expenditure, UBI favours the privatization of public services (Mestrum 2016). Finally, a word has to be said about the kind of solidarity universal basic incomes imply. Welfare states organize a horizontal and structural solidarity of all with all; it is a kind of collective insurance. Basic income, on the contrary, implies a vertical solidarity between the state and a citizen, and another citizen, and another citizen. The message to these citizens is: here is your money, now leave us alone; take care of yourself. In other words, it is a fundamentally liberal solution.
Today, there is a lot of semantic confusion around basic incomes. Many people speak about it and want to promote it, while in fact they only mean to introduce a guaranteed minimum income for those who need it, for those who for one reason or another cannot be active on the labour market. This is a totally different kind of solution that certainly can be supported since it offers income security, a crucial element of wellbeing and social protection.
'Social protection' as used in this article is the overarching umbrella concept for different social policies. It includes social security (social insurances against economic and social hazards such as sickness, unemployment, labour accidents… and collective saving systems for old age pensions), social assistance (helping the poor), public services and labour law. Today, for some international organizations, social protection is more or less synonymous with poverty reduction policies, since they gave up on 'universal' systems for all citizens.
---
Social Protection Floors
The International Labour Organization adopted a Recommendation in 2012 1 on 'Social Protection Floors'. This is a somewhat simplified and reduced way of putting meat on the bones of its Convention 102 of 1952 on minimum standards for social security. 2 This initiative can certainly be supported and, if ever realized, it would mean huge progress for people all over the world. But we have to be aware that it is very limited and includes only income security in case of illness, old age and unemployment, maternity and child care, as well as health care. Given the absence of any kind of social protection in many countries, this would indeed mean progress, but it can hardly be seen as sufficient protection for a life in dignity.
A further reason why some caution is necessary is the fact that the ILO and the World Bank have engaged in a joint initiative for 'universal social protection'. 3 As we know, it is the World Bank that came out with 'poverty reduction' in 1990 and 'social protection' some twenty years later, all the while refusing to change one iota of the basics of its neoliberal adjustment policies. The World Bank now proposes a tiered system of social protection, with a limited system for all but more particularly for the poor, presented as a 'poverty prevention package' (Mestrum 2019). It calls this 'universal' in its own sense, that is, 'progressive universalism', referring to the 'availability' of benefits when and where they are needed. 4 All this means that there are few arguments against this initiative, but it is important to know that it is limited and that it will not stop privatization; on the contrary. In fact, this kind of social protection is at the service of markets, creating private markets for health and education, and protecting people so they can improve their productivity. At the World Bank, the reasoning behind it is purely economic.
---
Interlinkages
What then can be the solution? In his 'Contradictions of the welfare state' Claus Offe stated that capitalism does not want any social protection, while at the same time it cannot survive without it (Offe 1984). It is easy to see that World Bank type solutions belong to the part that capitalism cannot do without. They help to maintain the legitimacy of the system and should prevent people to fall in extreme poverty.
One has to look, then, for the objectives of social protection. If one considers it is indeed protection of people, geared toward social justice and peace-mentioned in the Constitution of the ILO 5 -we have to leave behind economic thinking and start a journey from the basic needs of people. These universal needs have given rise to the definition of human rights, civil, political, economic, social and cultural rights that governments are bound to respect, protect and fullfil. Food, shelter, clothing, housing, health care … no one can do without, though the way these needs can be met will differ from one country to another, from one historical period to another. This indicates a first element that will lead to social commons: people have to be involved in the way their social policies are shaped, they know best what is to be done in a given context, at what moment.
Secondly, and taking into account the current coronacrisis, it is obvious that health care is central but will not be enough. If people have no clean water and soap to wash their hands, there is a problem. If people live in slums or are homeless, they cannot be confined with a whole family and children. If they are street vendors, their choice is dying from hunger or dying from a virus. In other words, their health and indeed survival depend on much more than just doctors, hospitals and medicines. Housing, labour, their natural environment and psychological needs play a direct role as well. More in general, if people lack literacy, they cannot read messages on the dangers of junk food. If their incomes are too limited, they have no money for healthy food. If people have good jobs but are exposed to dangerous substances in their factories, they will get ill. If farmers have to use toxic pesticides on their land, they will get ill, and their produce risks to make consumers ill as well.
Thirdly, it is obvious that preventing is so much better than curing. So, if we really want people to live in good health, beyond curing the illnesses they might suffer from, we necessarily have to start looking at the basic elements of social security: people require income security to protect them from distress, fear and want. Next to that, people need good labour laws to provide and protect their jobs with decent wages and working hours, with a possibility for collective bargaining, with protections against exposures to dangerous substances and other risks.
People will also need public services, health care, obviously, but also education, housing, transport, communica-tion… as well as environmental policies to provide clean air, water and green spaces. It is obvious that in order to tackle all these problems and solutions, one will also have to look at transnational corporations and at the economic system itself. It becomes clear that in order to have healthy food without toxic residues, and housing at affordable prices, free markets will have to be reined in.
In the quest for the alternatives, amongst others one might look at feminist economics, the notion of putting care in the centre. Can an economy at the service of people and of societies not also be an economy of care, caring for the needs of people, producing what people need? That is why the social and solidarity economy, cooperatives and other forms of co-responsible production can offer a perspective for a better future.
---
Social Commons 6
Where, then, do the commons come in?
According to Dardot and Laval's seminal book on the common (Dardot and Laval 2014), commons are the result of a social and political process of participation and democratic decision-making concerning material and immaterial goods that are looked at from the perspective of their use value, eliminating or severely restricting private ownership and the rights derived from it. They can concern production as well as reproduction; they refer to individual as well as collective rights.
Following this definition, social protection systems may, broadly speaking, be considered commons as soon as a local community, a national organization or a global movement decides to consider them as such, within a local, national or global regulatory framework. If they organize direct citizen participation in order to find out what these social protection systems should consist of and how they can be implemented, they can shape them in such a way that they fully respond to people's needs and are emancipatory.
Considering economic and social rights as commons, then, basically means democratizing them: stating that they belong to the people and deciding on their implementation and their monitoring. This will clearly involve a social struggle, because in the past neoliberal decades these rights have been hollowed out, public services have been privatized and labour rights have weakened if not disappeared. Moreover, democratic systems have been seriously weakened, reducing the real participation of people to a bare minimum. While markets have grown, the public sphere has shrunk.
In other words, this approach allows for doing what was mentioned before: people's involvement in shaping and putting in place social protection processes and systems, which look beyond the fragmented narratives of rights, go beyond disease control and develop instead a truly intersectional approach in order to guarantee human dignity and real sustainability.
One of the positive elements of the current COVID crisis has been the flourishing of numerous initiatives of local solidarity and mutual aid, with people helping the homeless and their elderly neighbours, caring for the sick, and organising open spaces and playgrounds for kids. This help was crucial for overcoming a very difficult period, and it might be a good start for further collective undertakings that could indeed lead to more commons.
Taking into account what was said above on the many interlinkages, this might mean, in the health sector, putting in place interdisciplinary health centres, where doctors, care workers, social assistants and citizens cooperate in coordinated community campaigns, planning primary care, above all, as a specialty.
However, these local actions cannot be a substitute for a more structural approach. Commons are not necessarily in the exclusive hands of citizens and are not only local. States or other public authorities also have to play their role. We will always need public authorities for redistribution, for guaranteeing human rights, for making security rules, etc. It means they are co-responsible for our interdependence. But the authorities we have in mind in relation to enhancing our economic and social rights or our public services will have to be different from what they are today. We know that public authorities are not necessarily democratic; very often they use public services and social benefits as power instruments or for clientelist objectives. That is why State institutions and public authorities will themselves have to act as a kind of public service, in real support of their citizens.
In the same way, markets will be different. If social protection mechanisms, labour rights and public services are commons, the consequence is not that there is nothing to be paid anymore. People who work obviously have to be paid, even if they work in a non-profit sector. However, prices will not respond to a liberal market logic but to human needs and the use value of what is produced.
So, if we say social commons go beyond States and markets, we do not say they go without States and markets. It will be a different logic that applies.
---
System Change
By focusing on the individual and collective dimensions of preventive health care and by directly involving people in shaping public policies, the commons approach can become a strategic tool to resist neoliberalism, privatization and commodification: in short, a tool for system change. It will allow us to build a new narrative and develop new practices to organize people's movements better and more broadly. Shaping commons means building power together with others.
Indeed, health and social protection, geared towards social justice, can be an ideal entry point for working on more synergies, beyond the fragmented approaches of social and economic policies. Today, many alternatives are readily available, all with the objective of preserving our natural environment, stopping climate change, reforming the economy away from extractivism and exploitation, and restoring public services. Faced with the hollowing out of our representative democracies, many movements are working on better rules for giving all people a voice that is listened to. Even at the level of international organizations, proposals are being made to fight tax havens, illicit financial flows and other mechanisms for tax evasion. There is no need for one big agreement that includes them all, since even separately they can all help us get out of the current system, which is destroying nature and humankind.
Nor is social justice the only entry point or the only road to take. Starting from the environment or from the economy, a comparable road can be taken. What it does suppose is that all roads are taken and followed with 'obstinate coherence', that is, followed to the end, until the objective, be it social justice, a care economy, full democracy, or human dignity with civil, political, economic, social and cultural rights, is reached.
The current COVID crisis puts the focus on health and gives us an opportunity for mapping this road, for indicating its possibilities, and for showing all the interlinkages and synergies. It is up to social movements and progressive governments to follow that road, to push for changes in sectors that at first sight are not related to the issue one fights for, but in the end are crucial for it. If one works for social protection, one will indeed also have to point to the importance of clean air and good agricultural practices. It might be rather easy to organize commons at the local level, but it is far more difficult to achieve something at the national, let alone the global level. How do we tackle global corporations? What we can do is point to the different negative effects of their products and practices and link them to a generally accepted goal. Hence the importance of the initiative currently being taken at the UN Human Rights Council to establish binding rules for transnational companies to respect human rights. If we want healthy food and if we want to prevent certain types of cancer, we have to ban certain toxic products. It is not easy, the fight will be long, and the social struggles may be disrupted at many moments. But is there any other strategy? If we want people to be in good health, in the sense of Alma Ata, 7 that is, 'a state of complete physical, mental and social wellbeing' as a fundamental human right, we not only have to point to the lack of social protection, but also to some practices of global corporations, from Facebook to Bayer. If we want economic and social rights to be respected, we will have to look at building standards and link them to the cheap clothes made available to western consumers.
What will be needed is a broad effort in popular education. In the developed countries of Western Europe, too many people no longer know where social protection systems come from, how social struggles made them possible, what kind of solidarity is behind them, and why collective solidarity is better than individual insurance. In many countries of the South, people do not even know their rights or do not believe they can really be fulfilled. Some experience already exists with political laboratories where public authorities meet with health and social professionals as well as with citizens and their organizations in order to see how to organize and improve social protection systems.
---
Conclusion
At a time of urgent health needs and social upheaval in numerous countries, and at a moment when right-wing populism, authoritarianism and even fascism are re-emerging, it is also extremely urgent for social movements to get their act together. That means going beyond the usual protests, developing practical alternatives, acting as watchdogs for public policies, and building alternative narratives and practices. Counter-hegemonic movements are needed at the local, national, regional and global levels. 'Long-term social and political change happens more frequently by setting up and maintaining alternative practices than by protest and armed revolution' (Pleyers 2020).
In short, what is urgently needed is counter-power in an interdependent world. We can start by reclaiming social protection, stating that it is ours, and bringing it back to its major objective: to protect people and societies and to promote the sustainability of people, societies and nature.
---
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
fbc8b84f2b6fbdc36087b539891dcbe95e9ceed8 | Exploring the impact of early life factors on inequalities in risk of overweight in UK children: findings from the UK Millennium Cohort Study | 2016 | ["JournalArticle"] | Background: Overweight and obesity in childhood are socially patterned, with higher prevalence in more disadvantaged populations, but it is unclear to what extent early life factors attenuate the social inequalities found in childhood overweight/obesity. Methods: We estimated relative risks (RRs) for being overweight (combining with obesity) at age 11 in 11 764 children from the UK Millennium Cohort Study (MCS) according to socio-economic circumstances (SEC). Early life risk factors were explored to assess if they attenuated associations between SECs and overweight. Results: 28.84% of children were overweight at 11 years. Children of mothers with no academic qualifications were more likely to be overweight (RR 1.72, 95% CI 1.48 to 2.01) compared to children of mothers with degrees and higher degrees. Controlling for prenatal, perinatal, and early life characteristics (particularly maternal pre-pregnancy overweight and maternal smoking during pregnancy) reduced the RR for overweight to 1.44, 95% CI 1.23 to 1.69 in the group with the lowest academic qualifications compared to the highest. Conclusions: We observed a clear social gradient in overweight 11-year-old children using a representative UK sample. Moreover, we identified specific early life risk factors, including maternal smoking during pregnancy and maternal pre-pregnancy overweight, that partially account for the social inequalities found in childhood overweight.
---
INTRODUCTION
---
Background
Addressing the obesity epidemic is a global public health priority. In England, roughly one third of children aged between 2 and 15 are overweight/obese. 1 Being overweight/obese increases the risk of developing type 2 diabetes, heart disease and some cancers. 2 3 Furthermore, childhood overweight/obesity is associated with social and psychological effects, with increased risk of mental health problems, stigmatisation, social exclusion, low self-esteem, depression, and substance abuse. 3 The cost of obesity has been estimated at over £5 billion per year in 2007 and is predicted to reach £50 billion per year by 2050. 4 There are large social inequalities in childhood overweight/obesity: 5 a systematic review of 45 studies from a diverse pool of western developed countries found a consistent relationship between lower socio-economic circumstances (SEC) and obesity risk. The relationship was particularly strong using measures of maternal education as the SEC indicator. Compared to income and occupation, education is suggested to have a stronger influence on parenting behaviours in the pathway from low SEC to the development of adiposity. 5 Despite recent evidence suggesting a stabilisation of overweight/obesity prevalence in England, 6 7 socioeconomic inequalities in childhood overweight/obesity continue to widen. 7 A number of studies suggest that early life risk factors, such as parental and maternal smoking during pregnancy, are predictive of childhood overweight/obesity. [8][9][10][11] However, few studies have explored the extent to which these factors attenuate inequalities in overweight/obesity in later childhood. This study therefore aimed to assess whether early-life risk factors attenuate inequalities in overweight/obesity in 11-year-old children from the UK.
---
METHODS
---
Design, setting, and data source
We used data from the Millennium Cohort Study (MCS), a nationally representative sample of children born in the UK between September 2000 and January 2002. Data were downloaded from the UK Data Archive in 2014. The study over-sampled children living in disadvantaged areas and those with high proportions of ethnic minority groups by means of a stratified cluster sampling design. 12 Further information on the cohort and sampling design can be found in the cohort profile. 12 This study uses data collected on children at 9 months and 11 years. The analysis did not require additional ethical approval.
---
What is already known on this topic
Childhood overweight and obesity are more common in disadvantaged children, but it is unclear to what extent early life factors attenuate this relationship.
---
What this study adds
In this large, nationally representative longitudinal study, inequality in overweight and obesity in preadolescence was partially attenuated by early life risk factors, including maternal smoking during pregnancy and having a mother who was overweight before pregnancy.
---
Outcome measure: overweight (including obesity)
At 11 years, trained investigators collected data on children's height to the nearest 0.1 cm and weight to the nearest 0.1 kg. BMI was calculated by dividing weight (in kilograms) by height squared (in metres). Being overweight (combining overweight and obesity scores) was defined using the age- and sex-specific International Obesity Task Force (IOTF) cut-offs (baseline: thin or healthy weight).
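As a purely illustrative aside (not the study's code), the BMI calculation and cut-off comparison can be expressed as follows; the cut-off value passed in must come from the published IOTF tables, which vary by age and sex, and the number used in the example is an arbitrary placeholder.

# Illustrative BMI calculation and overweight classification (hypothetical helper).
# Real IOTF cut-offs are age- and sex-specific and must be taken from published tables.
def bmi(weight_kg: float, height_cm: float) -> float:
    height_m = height_cm / 100.0
    return weight_kg / height_m ** 2

def is_overweight(weight_kg: float, height_cm: float, iotf_cutoff: float) -> bool:
    # iotf_cutoff: published IOTF overweight threshold for the child's age and sex
    return bmi(weight_kg, height_cm) >= iotf_cutoff

print(round(bmi(45.0, 145.0), 1))                     # 21.4
print(is_overweight(45.0, 145.0, iotf_cutoff=20.6))   # True (placeholder cut-off)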
---
Exposure: SEC
The primary exposure of interest was maternal academic qualifications, used as a fixed measure of SEC at the birth of the MCS child. The highest qualification attained by the mother was established by questionnaire at the first wave and categorised in this study into six levels: degree plus (higher degree and first degree qualifications), diploma (in higher education), A-levels, grades A-C, GCSE grades D-G, and none of these qualifications.
---
Mediators: early life risk factors
We examined the following early life risk factors associated with childhood overweight risk, based on findings from a systematic review (see ref. 13). Perinatal factors and exposures during pregnancy: maternal pre-pregnancy overweight (yes or no); maternal smoking during pregnancy (none, 1-10 cigarettes per day (cpd), 11-20 cpd, >20 cpd); birthweight (normal, low, or high); preterm birth (yes or no); caesarean section (yes or no). Early life postnatal exposures measured at 9 months: breastfeeding duration (never, less than 4 months, greater than 4 months); early introduction of solid foods (coded as <4 months yes/no, as per Department of Health guidance at the time of the survey); and parity.
---
Baseline confounding factors
Sex and ethnicity of the child, and maternal age at the birth of the MCS child, are associated with both the exposure and outcome measures and so were considered as confounding factors.
---
Analysis strategy
Following the Baron and Kenny steps to mediation, 14 we explored the unadjusted association between maternal qualifications (primary exposure) and childhood overweight at 11 years (outcome measure). All analyses were conducted in STATA/SE V.13. We explored the associations between potential mediators and overweight, calculating unadjusted relative risks (RRs) using Poisson regression. Following this, we explored the association between maternal qualifications and all potential mediators. In the final analysis, sequential models were fitted, calculating adjusted RRs for overweight using Poisson regression on the basis of maternal qualification (with children of mothers with the highest qualifications as the reference group), adjusting for the potential mediators that were significantly associated with overweight at the p<0.1 level in the univariate analysis.
We used a sequential approach to construct the adjusted models, first adding confounding variables, then perinatal factors and exposures during pregnancy, and finally postnatal exposures, to show the association between SEC and overweight. Mediation was taken to be a reduction in, or elimination of, statistically significant RRs in a final complete case sample. 15 We estimated all model parameters using maximum likelihood, accounting for sample design and attrition. We undertook three sensitivity analyses, repeating the analysis with income as an alternative measure of SEC; calculating the relative index of inequality (RII); and also using the decomposition method. 16 The results from the sensitivity analysis can be found in the online supplementary material.
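For readers who wish to reproduce this style of model in open-source software, the sketch below shows one common way to estimate adjusted RRs for a binary outcome: Poisson regression with robust (sandwich) standard errors. This is an illustration only, not the study's Stata code; the file name, variable names and category labels are assumptions, and the published analysis additionally applied the MCS sampling and attrition weights, which are omitted here for brevity.

# Illustrative modified-Poisson estimation of RRs (not the authors' Stata code).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("mcs_analysis.csv")  # assumed file with a 0/1 outcome and covariates

model = smf.glm(
    "overweight_11 ~ C(mat_qual, Treatment(reference='degree_plus'))"
    " + C(sex) + C(ethnicity) + C(mat_age_grp)",
    data=df,
    family=sm.families.Poisson(),
)
result = model.fit(cov_type="HC1")  # robust standard errors for the binary outcome

rr = np.exp(result.params)        # relative risks
ci = np.exp(result.conf_int())    # 95% confidence intervals on the RR scale
print(pd.concat([rr.rename("RR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))

Mediators would then be added in blocks (pregnancy and perinatal factors, then postnatal factors) and the change in the maternal-qualification RRs inspected across models.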
---
RESULTS
11 764 children were present at 9 months and 11 years with data on overweight status. 9424 (80%) had full data on all exposures of interest in the fully adjusted model. The prevalence of overweight at age 11 was 33.1% in children whose mother had lower qualifications, compared to 20.1% in the highest maternal qualification group (degree plus). All the other covariates of interest, except for sex, varied by level of maternal qualifications (table 1).
---
Associations of covariates with overweight
In the univariate regression, lower maternal qualifications, female sex, mixed, Pakistani, Bangladeshi and black ethnicity, maternal age of 35 and older at MCS birth, maternal prepregnancy overweight, maternal smoking during pregnancy, more than 1 child in the household, high birthweight, caesarean section, breastfeeding for less than 4 months, and introducing solid foods before 4 months were all associated with an increased RR for overweight in children at age 11 (table 2 and figure 1).
Figure 1 shows the unadjusted and fully adjusted covariate estimates. In the fully adjusted model, lower maternal qualifications, female sex, mixed, Pakistani, Bangladeshi and black ethnicity, maternal age of 30 and older at MCS birth, maternal pre-pregnancy overweight, smoking during pregnancy, high birthweight, never breastfeeding, and introducing solid foods before 4 months were all significantly associated with an increase in RR for overweight. There was no significant effect associated with parity, low birthweight, preterm birth or caesarean section.
---
Association between maternal academic qualifications and overweight, adjusted for other early life factors
Figure 2 shows the RRs for maternal qualification and overweight before and after adjustment for covariates added sequentially using a life-course approach (see online supplementary material for data tables showing all the model coefficients). The RR increases from 1.72 (95% CI 1.48 to 2.01) to 1.80 (95% CI 1.54 to 2.10) after adjusting for confounders. There are incremental changes in the RR evident after adjusting for maternal pre-pregnancy overweight, maternal smoking during pregnancy, and breastfeeding. In the final full model, the RR comparing lowest to highest qualifications remains significant (1.44, 95% CI 1.23 to 1.69). Repeating the analysis, but only adding maternal pre-pregnancy overweight and maternal smoking during pregnancy to the confounder-adjusted model attenuated the RR to 1.47 (95% CI 1.26 to 1.71), indicating that the percentage of effect mediated by these factors equates to 41.3% (RR reduction).
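For clarity, the 41.3% figure is consistent with the standard attenuation calculation on the excess relative risk scale (our reading of the method; the formula is not spelled out in the text): percentage mediated = (RR adjusted for confounders − RR additionally adjusted for the two mediators) / (RR adjusted for confounders − 1) × 100 = (1.80 − 1.47) / (1.80 − 1.00) × 100 ≈ 41.3%.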
---
Sensitivity analysis
The conclusions of the study were similar when we used household income as the measure of SEC; when we used RII as the measure of inequality; and when we used an alternative method for mediation analysis (see online supplementary material).
---
DISCUSSION
Using a nationally representative sample, we show that overweight status at age 11 is socially patterned. Lower maternal qualifications, female sex, mixed, Pakistani, Bangladeshi and black ethnicity, maternal age of 30 and older at MCS birth, maternal pre-pregnancy overweight, smoking during pregnancy, high birthweight, never breastfeeding, and introducing solid foods before 4 months were associated with an increase in RR for overweight at 11 years. Maternal pre-pregnancy overweight and maternal smoking during pregnancy attenuated the RR in the lowest maternal qualifications group by around 40%, suggesting that a considerable amount of the social inequality in preadolescent overweight can be explained by these two variables.
---
Comparison with other findings
Our study corroborates findings from a systematic review: Shrewsbury and Wardle 5 reported that 42% of the included studies found an inverse association between SEC and adiposity, with the lowest SEC group having the highest level of adiposity. Using parental education as the SEC indicator, 75% of the studies demonstrated an inverse association between SEC and child adiposity. Children whose parents, particularly mothers, have lower levels of education are at a higher risk of developing adiposity. Shrewsbury and Wardle 5 noted Sobal's theoretical framework, which suggests that education, as an indicator of SEC, influences the knowledge and beliefs of parents and is theorised to play a more important role in the mechanism linking SEC and the development of adiposity than other SEC indicators (eg, income and occupation). Though some of the studies in the review adjusted for confounding, none attempted to explore factors that attenuate the social gradient.
Our study is the first to quantify the contribution of early-life factors in attenuating social inequalities in overweight/obesity on the basis of maternal education level in a nationally representative sample of 11-year-old children in the UK. We found maternal pre-pregnancy overweight to be an important contributor to inequalities in overweight at 11 years, reducing the RR in the sequential model from 1.8 to 1.6. A recent study in a Dutch cohort found similar results, concluding that parental BMI, maternal pre-pregnancy BMI, and smoking during pregnancy contributed most to educational inequalities in BMI in 6-year-olds (attenuation -54%, 95% CI -98% to -33% in the lowest educational group). 9 Maternal pre-pregnancy overweight is related to an increased risk of adverse health outcomes for mothers and infants, including gestational diabetes and large baby size, and may produce other programming effects related to increased risk of childhood overweight. 17 18 Parental overweight has also been found to potentially contribute to childhood overweight via family eating and activity patterns, and via factors in later life relating to child fat intake, snack consumption, and child preference for sedentary activities. [19][20][21] These factors reflect the importance of addressing the structural barriers to healthy eating faced by the parents of children growing up in more disadvantaged areas.
Our findings are similar to those of a large Irish study investigating determinants of socioeconomic inequalities in obesity in Irish children, which identified maternal smoking during pregnancy as a potential mediator. 11 Potential mechanisms include impaired foetal growth followed by rapid infant weight gain; 10 the influence of prenatal smoking on neural regulation, causing increased appetite and decreased physical activity; 22 the associations of smoking with other health-damaging behaviours after birth; 23 and the contribution of smoking to family poverty, leading to constrained food budgets and fuelling the consumption of cheap, poor quality foods. 11 Further longitudinal research efforts should be dedicated towards discovering the underlying mechanisms linking prenatal smoking to childhood overweight, and the extent to which they may also explain inequalities.
Our study suggests that shorter duration of breastfeeding may make a small contribution to the increased risk of preadolescent overweight in more disadvantaged children. Never breastfeeding was associated with a significantly higher risk of overweight in children at 11 years in the fully adjusted model, corroborating a previous study on the MCS data at an earlier age. 8
---
Strengths and limitations
This study used secondary data from a large, contemporary UK cohort, and the results are likely to be generalisable to other high-income countries. A wide range of information is collected in the MCS, which allowed us to explore a range of prenatal, perinatal, and early life risk factors for overweight, including different measures of SEC. Overweight status was based on age- and sex-specific IOTF cut-offs for BMI. Children's BMI was calculated from height and weight measures taken by trained interviewers, reducing reporting bias from family members. However, using BMI as an indicator of adiposity may not be as accurate as measuring total fat mass. 9 Missing data are a ubiquitous problem in cohort studies. Sampling and response weights were used in all analyses here to account for the sampling design and attrition to age 11. A complete case analysis was used, removing individuals with incomplete data. This approach may introduce bias when the individuals who are excluded are not a random sample from the target population. However, in this analysis the sample was sufficiently large, and the internal associations, which were the targets of inference within the sample population, are likely to be valid, although we speculate that they may underestimate the effect sizes in the full UK population. In our analysis of mediation we followed the Baron and Kenny approach. 14 We used multiple measures of SECs, all of which further supported our main findings. These alternative methods for mediation analysis are continuously being developed and have their limitations. 24 For example, we used the KHB model with logistic regression, as Poisson regression results were considered "experimental". In this respect our analysis is exploratory and opens up the possibility of more focused mediation analyses to quantify the mediating pathways for specific factors identified in our study.
The positive association between maternal and child overweight may in part reflect non-modifiable (eg, genetic) factors. Furthermore, maternal smoking in pregnancy may itself be a proxy marker for SEC. However, we did observe a dose-response relationship between smoking in pregnancy and overweight/obesity, and the association remained after adjusting for multiple measures of SECs, supporting the notion of a causal link.
Finally, in the absence of randomised controlled trial data to assess the causal relationship between early life risk factors and childhood overweight, we are reliant on the best quality evidence from prospective observational studies. The systematic review of observational studies by Weng et al, 13 concludes there is "strong evidence that maternal pre-pregnancy overweight and maternal smoking in pregnancy increased the likelihood of childhood overweight". In a nationally representative contemporary cohort of UK children we have shown that maternal overweight and smoking during pregnancy may also account for a significant proportion of the social inequality in overweight/obesity. However, as Weng et al point out, the association between smoking and overweight/obesity may be confounded by other lifestyle factors, such as poor diet.
---
Policy and practice implications
Policies to support mothers to maintain a healthy weight, breastfeed and abstain from smoking during pregnancy are important to improve maternal and child health outcomes, and our study provides some evidence that they may also help to address the continuing rise in inequalities in childhood overweight. Policies should focus on supporting access to healthy diets, particularly in the pre-conception and antenatal periods, and making healthy eating affordable for disadvantaged families. Future research aimed at reducing childhood obesity should also assess the inequalities impact of interventions in order to build the evidence base to reduce the large social inequalities found in overweight/obesity in childhood.
Contributors SM, SW and DT-R planned the study, conducted the analysis, and led the drafting and revising of the manuscript. SM, SW, AP, BB, CL and DT-R contributed to data interpretation, manuscript drafting and revisions. All authors agreed the submitted version of the manuscript.
---
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
---
Data sharing statement Statistical code and dataset available from corresponding author.
Open Access This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/ | 18,520 | 1,746 |
ea09103be8955a86b46f4a9b947a41d700a81b62 | Familial Environment and Overweight/Obese Adolescents’ Physical Activity | 2,019 | [
"JournalArticle",
"Review"
] | (1) Background: Family environments can impact obesity risk among adolescents. Little is known about the mechanisms by which parents can influence obesity-related adolescent health behaviours and specifically how parenting practices (e.g., rules or routines) and/or their own health behaviours relate to their adolescent's behaviours. The primary aim of the study was to explore, in a sample of overweight/obese adolescents, how parenting practices and/or parental modeling of physical activity (PA) behaviours relate to adolescents' PA, while examining the moderating role of parenting styles and family functioning. (2) Methods: A total of 172 parent-adolescent dyads completed surveys about their PA and wore an accelerometer for eight days to objectively measure PA. Parents completed questionnaires about their family functioning, parenting practices, and styles (authoritative and permissive). Path analysis was used for the analyses. (3) Results: More healthful PA parenting practices and parental modeling of PA were both associated with higher levels of adolescents' self-reported moderate-vigorous physical activity (MVPA). For accelerometer PA, more healthful PA parenting practices were associated with adolescents' increased MVPA when parents used a more permissive parenting style. (4) Conclusions: This study suggests that parenting practices and parental modeling play a role in adolescents' PA. The family's emotional/relational context also warrants consideration since parenting style moderated these effects. This study emphasizes the importance of incorporating parenting styles into current familial interventions to improve their efficacy. | Introduction
Over the last three decades, a marked increase in the prevalence of overweight or obese Canadian adolescents has raised concerns [1,2]. To help manage this ongoing problem, research suggests that engaging in positive health behaviours such as increased physical activity (PA) among other behaviours can act as a protective factor against obesity [3,4]. However, adoption of these weight-related health behaviours can be impacted by a number of proximal influences, including the family environment [5]. Indeed, there is evidence that even among children at genetic risk for developing obesity, family/home environment moderates their likelihood of developing obesity in childhood [6]. Therefore, understanding the familial factors that can influence behaviour change among overweight or obese adolescents is essential in order to target these powerful influences.
Parents, in particular, can influence their children directly through parenting practices (i.e., rules or routines) and through their own health behaviours, such as modeling PA. Parenting practices are specific actions or strategies parents use to help socialize their children's behaviours [7,8]. In the context of PA, parental support, in the form of emotional support (e.g., encouragement) [9,10] and logistical support (e.g., transportation to parks or playgrounds) [11,12], has been positively associated with adolescents' PA. In contrast, results have generally been mixed when examining the association between parent PA (modeling) and adolescents' PA [13,14]. For instance, accelerometer studies have found positive associations between parent and child/adolescent moderate-vigorous physical activity (MVPA) [14,15], while self-report studies remain inconsistent [16]. Together, these studies highlight the role of parents in influencing their adolescents' behaviours. However, the majority of this research has involved children and adolescents of normal weight. Evidence exploring the relationship between parenting practices and overweight or obese adolescents' PA is lacking [17].
In recent years, family context has emerged as an important factor in the formation of adolescents' PA behaviours. Two main contextual elements of interest are parenting style and family functioning. Parenting style is the emotional climate in which parents raise their child, or the way parents interact with their child [18]. According to Baumrind [19,20], three parenting styles exist: authoritative, authoritarian, and permissive. Authoritative parents exercise control in a supportive and understanding way, by encouraging verbal interaction. Authoritarian parents exercise high control in the form of demands and obedience, while discouraging verbal interaction. Permissive parents exercise minimal control, giving in to their child's demands and providing little to no structure. Family functioning, on the other hand, acts as an all-encompassing dimension that focuses on how family subsystems interact with one another, in terms of their cohesion and flexibility, to shape behaviours across the family unit. Although research in this area is generally sparse, recent models suggest that since parenting styles and family functioning are contextual elements, they may function at a higher level and act as moderators [21,22]. Specifically, parenting style and family functioning may act as moderators and influence child development (e.g., PA behaviours) indirectly by changing the effectiveness of parenting practices and modeling behaviours [18,23]. As a result, how children react to and perceive their parents' wishes/demands may stem from the broader familial environment [7,24]. More specifically, parenting styles and family functioning can lend a positive or negative undertone to the strategies employed by parents. For instance, parents who set strict boundaries on children's outdoor play may be viewed by their child as heavily controlling if the exchange between parent and child is such that parents enforce rules which the child must obey, ultimately hindering outdoor play. Alternatively, rule setting has the potential to be regarded as warm if the parent-child dynamic involves an age-appropriate discussion of the reasoning behind the rules, openness to changing rules, and so on [25]. Moreover, Kitzmann and colleagues [18] allude to the idea that parents' attempts to engage their children in activities may be more successful when they already enjoy interacting and spending time together as a family (high family functioning). As a result, children may adopt more positive PA behaviours than those in families who do not spend much time together (low family functioning). However, more research into these higher-level dimensions is needed to understand the extent to which context promotes PA behaviours among adolescents who are overweight or obese.
Although prior studies have examined parenting practices and parental modeling independently with regard to adolescent PA, less is known about whether both factors are jointly important or whether parenting styles and family functioning moderate these associations. Hence, the present study examines how parenting practices, parental modeling, and adolescent PA fit within these broader family-level components. The primary aim of the study (Figure 1) was to assess whether both parenting practices and parental modeling of PA are associated with adolescents' PA, while examining the extent to which parenting styles and family functioning act as moderators. Given that PA parenting practices and parental modeling may be correlated, the secondary aims of this study assessed these relationships separately and examined: (1) the relationships between parenting practices and adolescents' PA behaviours, while examining the role of parenting styles and family functioning as moderators, and (2) the relationships between parental modeling of PA and adolescents' PA, while examining the role of parenting styles and family functioning as moderators. Figure 1 presents the conceptual relationships tested in this study, guided by Bronfenbrenner's ecological model [26] as well as suggestions from other frameworks [7,18,23,24,27], which considered the moderating role of parenting style and family functioning, under the assumption that the influence of specific PA parenting practices (e.g., logistic support, facilitation) and parental modeling (parent's PA levels and self-report) on adolescent PA may be conditionally related to these higher-level parental factors.
---
Materials and Methods
---
Study design
This is a secondary analysis of the baseline data collected as part of a study elucidating the individual and household factors that predict adherence to an e-health family-based lifestyle behaviours modification intervention for overweight/obese adolescents and their family [28].
---
Participants
Participants for the analyses included 172 parent/adolescent dyads who filled out a baseline measurement tool prior to starting an e-health family-based lifestyle behaviour modification intervention [28]. Among these families, 68% were recruited via advertisements (newspapers, parenting magazines, Facebook, Craigslist), 28% were previous patients of the British Columbia (BC) Children's Hospital Endocrinology and Diabetes Clinic or Healthy Weight Shapedown program, and 5% were recruited via word of mouth. Parent-adolescent dyads were eligible to participate in the main study if the adolescent was overweight or obese according to the World Health Organization (WHO) cut-points [2] and the parent consented to take part in the study with them. Additional requirements included having internet at home, residing in the Greater Vancouver (BC) area, no plans to move during the study period (three years), and being literate in English. Adolescents were ineligible to participate in the study if they had any comorbidity (e.g., physical disability) that limited their ability to be physically active or eat a normal diet, a history of psychiatric problems or substance abuse, medication use that impacts body weight, or a Type 1 diabetes diagnosis.
---
Procedures
Ethics approval was obtained from the University of British Columbia and the University of Waterloo. At baseline, parents completed a number of online surveys that asked about their parenting practices, parenting styles, and family functioning. Adolescents and parents filled out a series of surveys on their PA habits. Additionally, adolescents and parents were required to wear an accelerometer (over their hip under their clothes) for eight full days following the baseline visit, during waking hours.
Finally, adolescent-parent pairs were asked to keep track of their sleep duration and times when they were not wearing the accelerometer in the logbook.
---
Measures
A series of self-report measures were used to capture parenting practices, parenting styles, and family functioning. Adolescent and parent PA were assessed using both self-report and objective measures.
Parenting practices (parent self-report): A family nutrition and PA screening measure [29] was used to assess PA parenting practices. An exploratory factor analysis supported the one-factor structure of the original 15-item scale, with a score that had adequate internal consistency (0.70) and was related to children's body mass index (BMI) categories [29]. A four-factor structure, composed of PA, eating, breakfast, and screen time, was also supported (Cronbach's alpha coefficients of 0.60, 0.64, 0.55, and 0.33, respectively), allowing practices related to specific behaviours to be examined. Items consisted of two opposing statements, from which parents selected the statement that applied to their child and/or family. For PA practices, three items asked whether the child participates in organized sports, whether the child is spontaneously active, and whether the family is active together. This response style was selected to normalize both positive and negative response options and so minimize social desirability bias [30].
Responses were converted to a four-point numerical scale, and reverse coded as needed, so that a score of four indicated more healthful parenting practices.
Parenting styles (parent self-report): Parenting styles were measured using a modified version of Cullen's 16-item authoritative parenting scale [31]. The original measure includes two subscales, authoritative (11 items) and negative (five items) parenting styles, measured on a four-point Likert scale ranging from never to always, and has previously been tested in a sample of ethnically diverse parents and grade four to six students [31]. With regard to item variance, a principal component analysis (PCA) revealed that the authoritative subscale explained 30% while the negative subscale explained 11%. Cronbach's alphas for the authoritative and negative subscales were 0.72 and 0.73, and yielded Pearson test-retest correlation coefficients of 0.53 and 0.82, respectively. However, as this structure was not supported in the study sample according to the initial confirmatory factor analysis (χ2(df = 89) = 187.6, p < 0.00, RMSEA = 0.084 and 90% CI = 0.067-0.101, CFI = 0.844, SRMR = 0.080), the authoritative (e.g., "tell child he/she does a good job", "tell child I like my child just the way he/she is") and negative (e.g., "forget the rules I make for my child", "hard to say no to child") subscales were reduced to ten and three items, respectively, along with the addition of two correlated error terms according to modification indices and conceptual relevance. As the content of the remaining three items on the negative parenting scale was more permissive in nature, the scale is referred to as measuring "permissive" parenting. In the present sample, confirmatory factor analysis supported the revised structure (χ2(df = 62) = 109.8, p < 0.00, RMSEA = 0.070 and 90% CI 0.048-0.091, CFI = 0.919, SRMR = 0.067), with a Cronbach's alpha of 0.85 for authoritative and 0.59 for permissive. To derive indices, items were summed and dichotomized at the median to split parents into high/low authoritative and permissive styles. A fairly even split was achieved for the authoritative style (72 participants allocated to the high group and 82 to the low group), but not for the permissive style (50 participants allocated to the high group and 120 to the low group), because a majority of parents scored at the median.
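A minimal sketch of the scale derivation described above (summing the retained items and splitting at the median) is shown below; the data file and column names are hypothetical, and the original analyses were run in Stata rather than Python.
    # Hedged sketch: sum retained items, then median-split into high/low groups.
    # File and column names are assumptions, not the study's actual variables.
    import pandas as pd

    df = pd.read_csv("baseline_parent_survey.csv")      # assumed data file
    auth_items = [f"auth_{i}" for i in range(1, 11)]     # 10 retained authoritative items
    perm_items = [f"perm_{i}" for i in range(1, 4)]      # 3 retained permissive items

    df["authoritative"] = df[auth_items].sum(axis=1)
    df["permissive"] = df[perm_items].sum(axis=1)

    # Median split: scores above the median form the "high" group, so parents
    # scoring exactly at the median fall into the "low" group (consistent with
    # the uneven permissive split described in the text).
    for scale in ["authoritative", "permissive"]:
        df[f"{scale}_high"] = (df[scale] > df[scale].median()).astype(int)
    print(df[["authoritative_high", "permissive_high"]].mean())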
Family functioning (parent self-report): The Family Adaptability and Cohesion Evaluation Scale IV (FACES IV) [32] assessed family functioning. The original measure comprises 42 items assessed on a five-point Likert scale ranging from strongly agree to strongly disagree. The measure contains six subscales: balanced cohesion (e.g., "feeling very close"), balanced flexibility (e.g., "able to adjust to change"), enmeshed (e.g., "spending too much time together"), disengaged (e.g., "avoid contact with each other"), chaotic (e.g., "never seem to get organized"), and rigid (e.g., "rules for every possible occasion"). These six subscales measure two overarching dimensions, cohesion and flexibility [32].
The six-factor structure was supported in a sample of US post-secondary adults (mean age: 28) and all scales had high internal consistency (Cronbach's alpha 0.77 to 0.89) [32].
Using the conversion chart developed by Olson [32], raw scores for each family functioning subscale were transformed into subscale-specific percentile scores. Cohesion and flexibility ratio scores were computed independently from the percentile scores; refer to Table 1 for the formulas used to compute the ratio scores. For analytic purposes, the cohesion ratio and flexibility ratio were dichotomized. Participants were classified into the high family functioning group if their scores were above the median on both ratios; those with scores below the median on at least one of the two ratios were classified as low family functioning. Hence, families scoring above 1.9 on the cohesion ratio and above 1.4 on the flexibility ratio were categorized as belonging to the high family functioning group.
Accelerometer to measure MVPA (worn by child and parent): Two types of accelerometers (Actigraph GT3X or GT3X+) were used to measure MVPA. Parental modeling (PA) was computed using parent MVPA as described below. Data from the Actigraph accelerometers were processed using a program in Stata following previous recommendations [33,34]. Data from the accelerometers were collected in 10-second epochs and aggregated into one-minute intervals for the analyses. A day of recording was considered valid if the accelerometer was worn at least 10 hours, which represents 63% of the time participants are awake (for those who sleep eight hours). Non-wear time was defined as a period of at least 60 minutes with no recorded activity [33]. Participants with at least three valid days (including one weekend day) of wear time were included in the analyses. To determine minutes of MVPA, child- and parent-specific cut-points were used (≥2296 and ≥1952 accelerometer counts per minute, respectively) [35]. Minutes at or above these cut-points were summed to calculate total minutes of MVPA during the assessment week [36]. To determine the average minutes of MVPA per day at baseline, total MVPA was divided by the number of valid days.
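The accelerometer scoring rules described above (60-minute non-wear runs, at least 10 hours of wear for a valid day, a minimum of three valid days, and age-group cut-points) could be implemented roughly as follows. This is a simplified Python sketch under assumed column names (id, day, counts); the study itself used a Stata program, and the weekend-day requirement is omitted here for brevity.
    # Hedged sketch of the accelerometer scoring rules; minute-level data with
    # columns id, day, counts are an assumption about the input layout.
    import pandas as pd

    CHILD_CUT = 2296       # counts/min defining child MVPA (>=1952 for parents)
    NONWEAR_RUN = 60       # consecutive zero-count minutes treated as non-wear
    VALID_WEAR_MIN = 600   # >=10 hours of wear for a valid day
    MIN_VALID_DAYS = 3     # weekend-day requirement omitted for brevity

    def average_daily_mvpa(minutes: pd.DataFrame, cut: int = CHILD_CUT) -> pd.Series:
        """Average MVPA minutes/day for participants with enough valid days."""
        valid_days = []
        for (pid, day), d in minutes.groupby(["id", "day"]):
            is_zero = d["counts"].eq(0)
            run_id = (is_zero != is_zero.shift()).cumsum()      # label runs of equal values
            run_len = is_zero.groupby(run_id).transform("size")
            nonwear = is_zero & (run_len >= NONWEAR_RUN)        # long zero runs = non-wear
            worn = d.loc[~nonwear]
            if len(worn) >= VALID_WEAR_MIN:                     # valid wear day
                valid_days.append({"id": pid, "day": day,
                                   "mvpa": int((worn["counts"] >= cut).sum())})
        days = pd.DataFrame(valid_days)
        enough = days.groupby("id").filter(lambda g: len(g) >= MIN_VALID_DAYS)
        return enough.groupby("id")["mvpa"].mean().rename("avg_mvpa_min_per_day")

    minute_data = pd.read_csv("minute_counts.csv")   # assumed file and layout
    print(average_daily_mvpa(minute_data).head())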
Seven-day physical activity recall (PAR) to measure MVPA (interview-administered to child and parent separately): The seven-day PAR is a semi-structured interview aimed at estimating the amount of MVPA the parent or child engaged in for 10 minutes or longer during the seven days leading up to the interview. PA parental modeling was also computed using parent self-report of MVPA as described below. The measure, adapted from the Stanford Five-City Project [37], is primarily used to record the intensity and duration (in minutes) of participants' activities. To aid participants in identifying which level of intensity corresponded to the activity they performed, they were provided an overview of three different levels of intensity: leisure walking (i.e., a relaxing walk), moderate activities (i.e., brisk walking), and very hard activities (i.e., running hard). In addition to the regular interview questions, probing was employed to ensure that sufficient information was obtained from each participant. The Compendium of Energy Expenditure for Youth [38] was used to assign the appropriate number of metabolic equivalents (where 1 MET is the amount of energy expended at rest) to each activity the participant performed. Self-reported MVPA time was defined as the average minutes per day spent performing activities of ≥4 METs [35,39,40]. Time spent in MVPA was computed by summing minutes across all activities at or above this threshold, and the weekly total was divided by seven to obtain average minutes of MVPA per day.
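As a small illustration of the PAR scoring step (keeping activities at or above 4 METs and averaging over seven days), consider the following sketch; the MET values and activity list are illustrative examples, not the actual compendium entries or study data.
    # Hedged sketch of the 7-day PAR scoring: illustrative METs and minutes only.
    MET_LOOKUP = {"leisure walking": 3.5, "brisk walking": 5.0, "running hard": 10.0}
    recall = [("brisk walking", 30), ("leisure walking", 45), ("running hard", 20)]  # (activity, minutes) over 7 days

    # Keep activities at or above 4 METs, sum the minutes, and average per day.
    mvpa_minutes = sum(mins for activity, mins in recall if MET_LOOKUP[activity] >= 4.0)
    print(f"Average self-reported MVPA: {mvpa_minutes / 7:.1f} min/day")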
---
Data Analysis
Path analysis was used to conduct all analyses in Stata 13. Full information maximum likelihood was employed to handle missing data. For all the analyses, two models were run: one model using adolescents' MVPA measured with accelerometry as the dependent variable, and another using adolescents' MVPA measured with self-report as the dependent variable. The analyses for the primary and secondary aims followed the same process: (1) Model 1 tested whether PA-related parenting practices and/or parental modeling (PA) were associated with adolescents' MVPA, and (2) the final model included the relationships tested in Model 1 but added all the moderating variables (i.e., authoritative and permissive parenting styles as well as family functioning) and the relevant interaction terms, as depicted in Figure 1. Interaction terms were entered into the analysis one by one for each of the corresponding models and were kept in the model if p < 0.10. All variables were standardized prior to inclusion in the models to address convergence issues. Each model was adjusted for the following covariates: adolescent sex, adolescent age, and parental income. The secondary analyses are presented first, as they serve to interpret and build the model for the main aim of this study.
To examine the assumptions of linear regression, residual plots and bivariate scatterplots were inspected for each model. The magnitude of a path, indicated by its standardized coefficient (SC), and the associated p-value were examined to determine the significance of the path.
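A simplified single-equation analogue of the moderation test (standardizing predictors, adding a practices-by-permissive product term, and applying the p < 0.10 retention rule) is sketched below. It is not the Stata 13 path model used in the study (no FIML here; listwise deletion applies), and the data file and column names are hypothetical.
    # Hedged sketch of the moderation test; column names are assumptions and
    # all predictors are assumed to be numeric (permissive_high coded 0/1).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("dyads_baseline.csv")   # assumed analysis file

    # Standardize continuous variables prior to model entry.
    for col in ["pa_practices", "parent_mvpa", "adol_mvpa_accel"]:
        df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()

    model = smf.ols(
        "adol_mvpa_accel_z ~ pa_practices_z * permissive_high "
        "+ parent_mvpa_z + adol_sex + adol_age + parent_income",
        data=df,
    ).fit()
    print(model.summary())

    # The study retained an interaction term only if p < 0.10.
    print("retain interaction:",
          model.pvalues["pa_practices_z:permissive_high"] < 0.10)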
---
Results
The analytic sample is characterized in Table 2. As shown in Table 3, adolescents and parents accumulated around half an hour of MVPA per day as measured by accelerometry, and about 56 and 69 minutes of self-reported MVPA per day, respectively. The majority of parents had high scores on the authoritative parenting style scale and midrange scores on the permissive parenting scale, as most scored 6.0 on a scale ranging from 3 to 12. Regarding family functioning, most parents were balanced on the cohesion and flexibility ratios, as both mean ratios exceeded one.
Table 4 displays associations between PA parenting practices and adolescents' MVPA and whether associations were moderated by parenting styles and family functioning. As demonstrated in Table 4, Model 1 (without moderators) highlights that PA parenting practices were significantly associated with adolescents' self-report of MVPA and that there was a trend towards significance (p = 0.06) with adolescents' MVPA measured by accelerometry. Specifically, more healthful PA parenting practices were associated with higher levels of adolescents' MVPA. When the moderators were included in the model, the interaction term between permissive style and PA parenting practices became significant. In contrast, PA parenting practices was the only significant predictor for adolescents' self-report MVPA when the moderators were added. Figure 2 illustrates the interaction of permissive style by PA parenting practices, suggesting that more healthful PA parenting practices were positively associated with adolescents' MVPA but also indicating that this association was more pronounced among adolescents whose parents use a high permissive style compared to those with a low permissive style. As shown in the graph, however, the direction of this association reverses when parents employ less healthful PA practices.
In all models, adolescents' sex was the only significant covariate. The results suggest that adolescent boys had significantly higher MVPA than adolescent girls and this was observed for both accelerometry and self-report assessment of MVPA (Table 3).
Table 5 displays associations between parental modeling of PA and adolescents' MVPA, as well as whether parenting styles and family functioning moderated these associations. As shown in Table 5, Model 1 (without moderators) highlights that parental modeling of PA was significantly associated with adolescents' MVPA for both accelerometer and self-report. Specifically, parents who modeled high levels of PA were associated with increased PA among overweight/obese adolescents. When the moderators were added into these models, no significant effects emerged for parenting styles or family functioning, but parental modeling of PA remained significant. Table 6 displays the association of both PA parenting practices and parental modeling of PA with adolescents' MVPA and whether parenting styles and family functioning moderate these associations. Model 1 (without moderators) highlights that PA parenting practices and parental modeling of PA were significantly associated with self-reported MVPA. Specifically, more healthful PA parenting practices and parental modeling of PA were both associated with higher levels of adolescents' MVPA. Although parental modeling of PA (accelerometry) was significantly associated with adolescents' MVPA measured with accelerometry in Model 1, only a trend towards significance (p = 0.07) was observed for this relationship in the final model. When the moderators were added into the model, a significant interaction between permissive style and PA practices was observed, but only for MVPA measured with accelerometry. This is similar to our findings reported in Table 4 and is illustrated in Figure 2 (described above).
---
Discussion
The purpose of this study was to examine the effect of parenting practices and/or parental modeling on the PA behaviours of overweight/obese adolescents and to explore whether parenting styles and family functioning act as moderators. With regard to the primary aim of the study, when considering both PA parenting practices (i.e., facilitation, logistic support) and parental modeling of PA (i.e., PA self-report), both were significantly associated with adolescents' self-reported MVPA, with higher MVPA occurring in families that had more positive parenting practices and modeled an active lifestyle. In addition, a significant interaction between permissive style and PA parenting practices emerged for adolescents' MVPA measured with accelerometry, where permissiveness was found to amplify the association between healthy PA parenting practices and adolescents' MVPA. Interestingly, family functioning did not emerge as an important moderator. The findings were similar when PA parenting practices and parental modeling were examined independently (secondary aims), except that the association between parental modeling of PA and adolescents' MVPA measured with accelerometry was significant instead of being borderline significant. Overall, the results highlight the importance of healthy PA parental practices and modeling in supporting overweight/obese adolescents' MVPA, as well as the role of permissiveness in further supporting their engagement in PA.
Given that most of the literature has focused on the influence of parenting practices and modeling separately [12,15,41,42], the present study revealed that parenting practices and parental modeling together may be important factors in overweight/obese adolescents' PA behaviours. The few studies that have explored both practices and modeling together in the context of PA report results that conflict with the present study. Previous studies have reported that the importance of modeling is diminished by other constructs, such as parental encouragement and support [43][44][45]. For instance, a study conducted among grade 7-12 students found parenting practices, namely parental support, to be more influential than parental modeling [43]. However, these studies targeted a general sample of adolescents while the present study focused on overweight/obese adolescents, which may explain the discrepancies. It may be that parents who are more active or model an active lifestyle are in a better position to support their overweight/obese adolescents' PA as they can, for example, be active together. On the other hand, adolescents who are not overweight/obese may only need support from their parents to be physically active, such as transportation to a playground, while overweight/obese adolescents may need the additional modeling component to enhance their drive and motivation to be active. Therefore, the combination of parental modeling along with specific parenting practices, such as taking the child to an appropriate location for PA or providing encouragement, may be necessary to influence the activity of overweight/obese adolescents.
No evidence was found to support the hypothesis that family functioning would moderate the relationship between parenting practices and/or parental modeling and adolescents' physical activity. Of note, few studies to date have explored the role of family functioning as a moderator [18,46]. In a sample of healthy adolescents, one study found evidence of family functioning as a moderator of the relationship between family meals and unhealthful weight management behaviours [46]. Despite no literature exploring family functioning as a moderator within the context of child obesity, a review has provided some indirect evidence to help support this notion [18]. As correlational findings indicate, overweight/obese children have a greater likelihood of experiencing more family conflict and less family cohesion compared to their normal-weight counterparts [47,48]. Although the directionality of this effect remains unclear, these correlations suggest that in families where an overweight child is present, more support may be needed to help establish or manage positive health behaviours [18]. Although this review provides some reasoning to support the moderating effect of family functioning on adolescent health behaviours, the evidence base remains unclear [18,24]. In the present study, it is important to note that the null findings for family functioning may be a result of the sample's characteristics. Families in our sample were predominantly balanced in cohesion and flexibility; therefore, families categorized as high or low functioning may be quite similar to one another. Thus, future research should strive to capture families that truly fit into the high or low family functioning groups to better understand the true potential of family functioning.
The association between parenting practices and adolescents' PA behaviours was moderated by parenting styles; however, the pattern was only partially consistent with the study hypotheses. This finding highlights that the moderating effect of parenting styles on the association between parenting practices and adolescents' PA behaviours was more complex than anticipated. Two other studies have reported similar results, suggesting that more healthful practices performed in a more permissive way are associated with more adolescent MVPA [25,49]. According to Hennessy and colleagues, two types of PA parenting practices (monitoring and reinforcement) were associated with child accelerometer PA when expressed in the context of a permissive parenting style [25]. Similar findings were also observed by Langer and colleagues, who found parental support was only associated with adolescent PA when expressed in the context of a permissive parenting style [49]. One potential explanation for this finding may be that permissive parenting, characterized by high warmth and low demand, is associated with more unstructured playtime and more enjoyable activities [50]. Therefore, being permissive in the context of PA may provide adolescents with more free time for active play, and if they feel encouraged and supported by their parent with respect to PA, they may choose to be physically active.
The association between PA parenting practices, styles, and adolescents' MVPA was only observed when adolescents' MVPA was measured by accelerometry. While both accelerometer and self-report measures have been validated to assess PA, there are clear differences between the two. For instance, accelerometer data give more accurate estimates of walking-based activities and avoid many of the issues associated with self-report, such as recall and response bias [51]. However, it is important to highlight that accelerometers are unable to capture certain types of activities, such as swimming and activities involving the use of the upper extremities. Compared to direct measures, self-report methods appear to estimate greater amounts of higher-intensity (i.e., vigorous) PA than of low-to-moderate intensity PA [51]. The main difference in the present study is that the interaction between parenting practices and styles appeared with the accelerometer data but not with self-reported MVPA. Measurement error with self-report tends to be higher, as noted by the increased chance of recall and response bias, which may lead to decreased power and perhaps explain why a significant interaction was not observed with the self-report data.
The study has some limitations that should be considered. First, it is difficult to assume a cause and effect relationship due to the cross-sectional nature of the study. For instance, relationships observed in this study may be bi-directional since both parents and children can shape one another [52,53]. Second, measurement errors may have biased study results. MVPA was assessed with both subjective (self-report) and objective (accelerometer) measures. Self-report measures are subject to reporting biases, such as recall and social desirability bias, since individuals are known to have poor recall of past PA levels and tend to overestimate their PA (biased reporting and low validity), respectively [54][55][56]. Therefore, inconsistency in our results may be due to these various forms of measurement error. Third, the parenting measures had some measurement challenges. Of the three parenting styles developed by Baumrind, our study only captured two (authoritative and permissive). Therefore, this may have limited our ability to adequately capture the parenting style of each parent. Additionally, as all parenting measures (parenting practices, parenting styles, and family functioning) were self-reported, our results may have been impacted by social desirability bias. As such, this may have led to an overestimation of the results. Given the strong evidence of familial clustering of obesity [57,58], our study may have benefited from information on parental BMI. Including this variable in our analyses may have attenuated our results, given its association with the family PA environment as well as various parenting behaviours (e.g., monitoring child PA, setting limits for PA) [59]. Finally, our sample included adolescent volunteers who were classified as overweight or obese and were willing to take part in an e-health intervention. Despite the fact that our results may not be generalizable to the general population, it is important to consider overweight/obese adolescents not only because they are typically understudied, but because they are frequently targeted by treatment interventions. This is the first study to explore the moderating effects of both parenting styles and family functioning on adolescents' PA behaviours. It is one of the only studies to examine moderating effects in a sample of overweight or obese adolescents, which is essential when trying to design effective weight-management interventions. Understanding how parenting practices and modeling interact with styles and functioning on adolescents' health behaviours provides useful information for the development of familial interventions. It is also one of the few to use both accelerometers and self-report to directly measure and compare both parent and adolescents' PA levels.
Findings from this study offer implications for intervention development. First, interventionists (e.g., nurse practitioners) should consider parenting factors when counselling families with an overweight or obese adolescent. As part of family-based interventions, interventionists should encourage parents to not only provide support for their child's PA, but modify their own PA. Secondly, family context, specifically, parenting style, may help improve the efficacy of family-based interventions. For example, interventionists could teach parents that parenting styles and practices go hand in hand and elicit different PA behaviours from their adolescents.
---
Conclusions
In conclusion, the study emphasizes the need to consider both parenting practices and parental modeling in shaping overweight/obese adolescents' PA behaviours, as well as acknowledges the importance of using such parenting tactics in the appropriate context.
---
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. | 37,158 | 1,654 |
eed1499bdb7ae6d144a3231b4158e8aeb138b5d0 | Child and adolescent risk factors that differentially predict violent versus nonviolent crime | 2,017 | [
"JournalArticle"
] | While most research on the development of antisocial and criminal behavior has considered nonviolent and violent crime together, some evidence points to differential risk factors for these separate types of crime. The present study explored differential risk for nonviolent and violent crime by investigating the longitudinal associations between three key child risk factors (aggression, emotion dysregulation, and social isolation) and two key adolescent risk factors (parent detachment and deviant peer affiliation) predicting violent and nonviolent crime outcomes in early adulthood. Data on 754 participants (46% African American, 50% European American, 4% other; 58% male) oversampled for aggressive-disruptive behavior were collected across three time points. Parents and teachers rated aggression, emotion dysregulation, and social isolation in fifth grade (middle childhood, age 10-11); parents and youth rated parent detachment and deviant peer affiliation in seventh and eighth grade (early adolescence, age 12-14) and arrest data was collected when participants were 22-23 years old (early adulthood). Different pathways to violent and nonviolent crime emerged. The severity of child dysfunction in late childhood, including aggression, emotion dysregulation, and social isolation, was a powerful and direct predictor of violent crime. Although child dysfunction also predicted nonviolent crime, the direct pathway accounted for half as much variance as the direct pathway to violent crime. Significant indirect pathways through adolescent socialization experiences (peer deviancy) emerged for nonviolent crime, but not for violent crime, suggesting adolescent socialization plays a more distinctive role in predicting nonviolent than violent crime. The clinical implications of these findings are discussed. |
violations, such as theft, drug dealing, and burglary (Henry, Tolan, & Gorman-Smith, 2001; U.S. Office of Justice Programs, 2015). Despite rates of nonviolent crime that are lower than or comparable to those of other industrialized nations, the United States has one of the highest rates of violent crime, especially lethal violent crime ("Countries compared by crime," 2009).
Research suggests that a majority of young offenders engage in nonviolent crime, whereas only a small subset escalates to violent crime (Cohen & Piquero, 2009). Understanding the risk factors that distinguish the small group at highest risk for future violent crime could aid in early detection efforts and inform prevention strategies (Broidy et al., 2003). Most risk research has focused on criminal behavior broadly defined, but a few studies have explored the differential prediction of nonviolent versus violent crime (Loeber & Farrington, 2012;Piquero, Jennings, & Barnes, 2012). This paper adds to this literature by exploring common versus unique predictors of early adult violent versus nonviolent crime in a large sample of at-risk youth followed longitudinally, using multiple informants to assess childhood and early adolescent characteristics, with arrest records to document adult crimes.
---
Common vs. Unique Pathways to Violent and Nonviolent Crime
Extensive research suggests that the roots of antisocial development emerge in childhood, marked by elevated aggression and emotional difficulties, and exacerbated by parent-child conflict and harsh discipline (Dodge, Greenberg, Malone, & CPPRG, 2008). By early adolescence, deviant peer affiliation accompanied by detachment from parents and reduced parental monitoring fosters the initiation of antisocial behavior (Loeber, Burke, & Pardini, 2009).
Within this broad framework, researchers have identified differentiated developmental patterns. For example, Moffitt (2006) introduced the distinction between childhood-onset and adolescent-limited patterns, documenting higher rates of childhood aggression and self-regulatory deficits among youth who initiated antisocial behavior early and showed chronic adult criminal activity, relative to those who began antisocial behavior later and desisted by early adulthood. In a parallel line of inquiry, researchers have documented different etiological and developmental pathways characterizing overt aggression versus covert rule-breaking behavior (see Burt, 2012 for a review). However, rarely are youth followed from childhood through adulthood to determine whether distinct childhood and adolescent experiences differentially predict persisting adult patterns of violent versus nonviolent crime (Loeber & Farrington, 2012). This is a question of high practical significance, given the inordinate human costs of violent crime relative to nonviolent crime (Reingle, Jennings, & Maldonado-Molina, 2012). Some theorists have speculated that nonviolent and violent criminal behavior represent manifestations of the same underlying pathology (e.g., Sampson & Laub, 2003). Indeed, the frequency of nonviolent offending predicts future violent crime, suggesting they represent sequenced outcomes associated with a common antisocial developmental progression (Piquero et al., 2012). In contrast, research has also identified distinct risk factors that specifically predict violent offending (Broidy et al., 2003; Byrd, Loeber, & Pardini, 2012; Nagin & Tremblay, 1999).
---
Dysfunctional Social-emotional Development and Later Violent Crime
The most reliable predictor of later violent crime is elevated aggression in childhood (Loeber et al., 2009; Reingle et al., 2012). Trajectory studies by Nagin and Tremblay (1999), replicated by Broidy et al. (2003) across six cross-national longitudinal data sets, found that boys' violent crime in late adolescence was best predicted by membership in the highest trajectory of physical aggression from age 6-15 years. Similarly, several studies have documented higher levels of childhood physical aggression in samples of violent adolescents than in those who committed nonviolent or no offenses (Lai, Zing, & Chu, 2015; Reingle et al., 2012). Theorists have suggested that adult violence emerges when an early propensity for hostile, domineering behavior is reinforced and overlearned during childhood and adolescence (Broidy et al., 2003).
In addition to aggressive behavior, significant social and emotional difficulties in childhood may increase risk for later violence. Elevated aggression and the emergence of violence have each been linked with negative emotionality and problematic peer relations (Burt, 2012; Lynam, Piquero, & Moffitt, 2004; Veltri et al., 2014). Developmental theorists have speculated that elevated childhood aggression often reflects reactivity in the more primitive neural circuits associated with the processing of fear and rage, evoked when children feel threatened (Vitaro, Brendgen, & Tremblay, 2002). Adverse living conditions and social isolation undermine the development of core self-regulatory capacities, eliciting defensive anger and fostering emotion dysregulation (Cicchetti, 2002).
Consistent with this developmental analysis, research has linked difficulties regulating emotion and managing anger in childhood with later criminal activity (Eisenberg, Spinrad, & Eggum, 2010), and in some studies, specifically later violence. For example, in the Dunedin longitudinal study, boys who were emotionally dysregulated were more likely to engage in violent (but not nonviolent) offending in early adulthood (Henry, Caspi, Moffitt, & Silva, 1996).
Aggressive children who are emotionally dysregulated are particularly likely to experience peer rejection and social isolation, and thereby become excluded from positive peer socialization opportunities that facilitate the growth of communication skills, empathy, and general social competence (Bierman, 2004). Social isolation, in turn, increases risk for later violence (Hawkins et al., 2000). Children who are isolated from mainstream peers often play with other aggressive children who encourage rebellious behavior and reinforce antisocial norms (Powers & Bierman, 2013). Peer-rejected children appear particularly vulnerable to developing a heightened vigilance for social threat and cues of impending conflict, choosing to act aggressively rather than experience vulnerability (Erath, El-Sheikh, & Cummings, 2009). For these reasons, the combination of childhood aggression, emotion dysregulation, and social isolation may reflect dysfunction in social-emotional development that primes children for later violence, making them more angry, reactive, and easily provoked to attack compared to aggressive children without the same level of concurrent social-emotional risks.
---
Adolescent Predictors of Nonviolent and Violent Crime
The transition into adolescence, generally considered a second phase in the development of antisocial behavior, is normatively accompanied by autonomy-seeking behavior. For many adolescents, the drive to establish autonomy involves purposeful distancing from parents and increased peer engagement (Dishion, 2014). From a social control perspective, distancing from parents, who are likely to reinforce socially normative values, coupled with engagement with peers who are more likely to embrace nonconventional attitudes and rebellious behavior, can lead to the initiation of delinquency (Loeber & Farrington, 2012). When detaching adolescents cease sharing personal information with their parents, it greatly diminishes their parents' ability to monitor them and protect them from risky situations or risky peers (Kerr & Stattin, 2002).
Several studies suggest that adolescent risk-taking, detachment from parents, and deviant peer affiliation may be more strongly associated with nonviolent crime than with the escalation from nonviolent to violent crime, although evidence is mixed (Dishion, 2014;Dodge et al., 2008;Veltri et al., 2014). For example, Capaldi and Patterson (1996) found that reduced parental monitoring predicted both violent and nonviolent arrests in early adulthood, but did not explain unique variance in violent offending once nonviolent offending was considered. In another study, peer delinquency predicted both violent and nonviolent delinquency but showed a stronger association with milder and nonviolent forms of delinquency (Bernburg & Thorlindsson, 1999). In contrast, however, other studies have found peer violence and peer delinquency to predict later engagement in and trajectories of both violent and nonviolent crime (Henry et al., 2001;MacDonald, Haviland, & Morral, 2009).
From a theoretical perspective, detaching from parents and affiliating with deviant peers changes the social norms and controls to which adolescents are exposed and leads to increased engagement in unsupervised activity, often facilitating self-serving behavior and corresponding rule-violations (Dishion, 2014). Most peer-facilitated adolescent antisocial activities fall in the category of nonviolent crimes (e.g., substance use, theft) rather than interpersonal violence. Hence, detaching from parents and affiliating with deviant peers may increase risk for nonviolent crimes, but not necessarily increase risk for the escalation to violent crime, once the association with nonviolent crime is accounted for. Additional research is needed to test this hypothesis.
---
The Present Study
A growing base of research suggests that social-emotional dysfunction in childhood, along with elevated aggression, may indicate unique risk for the emergence of violent crime in later adulthood, both because these characteristics may increase parent detachment and deviant peer affiliation at the transition into adolescence, as well as because these characteristics indicate difficulty managing feelings of intensive anger and social alienation. Yet, unique pathways to violent and nonviolent crime remain under-studied, particularly because few longitudinal studies include measures of childhood social-emotional dysfunction and aggression, and measures of adult violent and nonviolent crime. The present sample included a large number of children living in risky contexts selected from four different areas of the United States and followed longitudinally from elementary school through early adulthood, with multiple measures of child social-emotional and behavioral functioning as well as court records of adult crime. As such, it offered a unique opportunity to explore differential predictors of violent and nonviolent crime, particularly the role of early social-emotional development along with early aggression. A key goal of this study was to better understand the relative roles of childhood social-emotional dysfunction and early adolescent risk factors as differential predictors of violent and nonviolent forms of early adult crime.
Based on research suggesting different pathways to violent and nonviolent crime (Hawkins et al., 2000;Loeber & Farrington, 2012), it was predicted that child aggression, emotion dysregulation and social isolation (reflecting childhood social-emotional dysfunction) would predict violent and nonviolent crime by increasing parent detachment and peer deviance, and also make a direct unique contribution to the prediction of violent crime. Given the less consistent research on associations between early adolescent social experiences and violent versus nonviolent crime, it was predicted that parent detachment and peer deviance would predict both forms of crime, with stronger (unique) contributions to nonviolent crime.
---
Method
---
Participants
Participants were 754 youth (46% African American, 50% European American, 4% other; 58% male) from a multi-site, longitudinal study of children at risk for conduct problems (Fast Track) that also involved a preventive intervention. This study used data collected from 1995 through 2009. Participants were recruited from 27 schools in high-risk areas located in four sites (Durham, NC; Nashville, TN; Seattle, WA; and rural PA.) In the large urban school districts, schools with the highest risk statistics (e.g. highest student poverty; lowest school achievement) were selected for participation; in the three participating rural school districts, all schools participated. All participating schools had kindergartens.
The sample selection proceeded as follows. First, in the late fall of three successive years, teachers rated the aggressive-disruptive behavior of all kindergarten children (total N = 9,594) on 10 items from the Authority Acceptance subscale of the TOCA-R (Werthamer-Larsson, Kellam, & Wheeler, 1991). Children who scored in the top 40% on this teacher screen at each site were identified (N = 3,274) and their parents rated aggressive-disruptive child behavior at home (Achenbach, 1991). Teacher and parent screen scores were averaged, and children were recruited beginning with the highest score and moving down the list until desired sample sizes were reached within sites (N = 891 high-risk children, including 446 randomized by school to the control group and eligible for this study; see Lochman & CPPRG, 1995 for details). In addition, a normative sample (N = 396) was recruited to be representative of the school population at each site. The normative sample was recruited only from the control schools, so that intervention effects would not affect longitudinal course. For this sample, children were stratified to represent each site population on dimensions of race, sex, and decile of the teacher screen, and then chosen randomly within these blocks for study recruitment. The normative sample included a portion of the high-risk control group to the proportional degree that they represented the school population. The selection of participants into the study is illustrated in Figure 3 (in the on-line appendix). The present study oversampled higher-risk students, including children from both the high-risk (59%) and normative (41%) samples, in order to increase variability in the risk factors and crime outcomes of interest. Of the 754 participants, 20 (3%) had no arrest records available. An MCAR test (Little, 1988) indicated that adult crime outcomes were missing completely at random. However, participants with missing data had higher levels of childhood aggression, emotion dysregulation, and youth-rated parent detachment and peer deviancy than participants with data. In structural equation models testing the study hypotheses, full information maximum likelihood estimation was used to account for missing data.
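A rough sketch of the two-gate screening procedure (top 40% on the teacher screen within site, then averaging standardized teacher and parent screen scores and recruiting from the top down) is given below; the file, column names, and site quota are hypothetical, and the sketch simplifies the year-by-year recruitment.
    # Hedged sketch of the multi-gate risk screen; all names and the quota are
    # illustrative assumptions, not the study's actual variables or targets.
    import pandas as pd

    screen = pd.read_csv("kindergarten_screen.csv")  # assumed columns: site, teacher_score, parent_score

    # Gate 1: keep the top 40% on the teacher screen within each site.
    top40 = screen[screen.groupby("site")["teacher_score"].rank(pct=True) >= 0.60].copy()

    # Gate 2: standardize both screens and average them into a single risk score.
    for col in ["teacher_score", "parent_score"]:
        top40[col + "_z"] = top40.groupby("site")[col].transform(
            lambda s: (s - s.mean()) / s.std())
    top40["risk_score"] = top40[["teacher_score_z", "parent_score_z"]].mean(axis=1)

    # Recruit from the highest risk score downwards until the site quota is filled.
    QUOTA = 110   # illustrative only; actual site targets differed
    recruited = (top40.sort_values(["site", "risk_score"], ascending=[True, False])
                      .groupby("site").head(QUOTA))
    print(recruited.groupby("site").size())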
---
Measures
One parent (the primary caregiver) and one teacher (the primary classroom teacher) rated child social-emotional functioning (aggression, emotion dysregulation, social isolation) in fifth grade (age 10-11). Primary caregivers included biological mothers (86%), biological fathers (5%), a grandparent (5%), or other (e.g., step-parents, adoptive parents, or other guardians; 4%). Parents and youth rated parent detachment, and youth rated peer deviancy in early adolescence (age 12-14). Arrest records were collected in early adulthood. Measures are described below; technical reports that provide items and psychometric properties of all measures are available at the Fast Track study website, http://fasttrackproject.org/datainstruments.php.
Child characteristics in late childhood-At the end of fifth grade, parents and teachers completed the Child Behavior Checklist - Parent and Teacher Report Forms (Achenbach, 1991). To assess aggression distinct from oppositional or hyperactive behavior, a 9-item narrow-band scale validated in a prior study (Stormshak, Bierman, & CPPRG, 1998) was used (e.g., gets in many fights, threatens, destroys things) (α = .91 parents, α = .92 teachers). Similarly, nine items were used to assess a narrow-band scale of social isolation (e.g., withdrawn, sulks, shy) (α = .72 parents, α = .79 teachers). For both measures, raw scores were standardized and averaged to create a parent-teacher composite. At the end of fifth grade, teachers also completed the emotion regulation subscale of the Social Competence Scale (CPPRG, 1995), comprised of nine items (each rated on a 5-point scale) assessing the child's ability to regulate emotions under conditions of elevated arousal (e.g., controls temper in a disagreement, calms down when excited or wound up; α = .78). The scale was reverse-scored to represent emotion dysregulation.
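As a small illustration of the composite-scoring step described above (standardize each informant's raw scale score, then average across informants), the snippet below shows one plausible implementation; the variable names are hypothetical and this is not the study's actual scoring code.

```python
import pandas as pd

def informant_composite(df: pd.DataFrame, cols: list[str]) -> pd.Series:
    """Standardize each informant's raw score (z-score) and average across informants."""
    z = (df[cols] - df[cols].mean()) / df[cols].std(ddof=0)
    return z.mean(axis=1)

# Hypothetical usage with parent- and teacher-rated aggression raw scores:
# df["aggression_composite"] = informant_composite(df, ["agg_parent", "agg_teacher"])
```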
Socialization influences in early adolescence-During the summers following seventh and eighth grade, youth and parents completed the Parent-Child Communication Scale, adapted for the Fast Track Project from the Revised Parent-Adolescent Communication Form (Thornberry, Huizinga, & Loeber, 1995). The youth version included 10 items, all reverse scored for this study, assessing perceptions of parent unreceptiveness (e.g., my parent is a good listener, my parent tries to understand my thoughts) and child secrecy (e.g., I discuss problems with my parent, I can let my parent know what bothers me; α = .59). The parent version included 11 items, reverse scored, assessing perceptions of child secrecy (e.g., my child talks to me about personal problems, my child tells me what is bothering him/her), and poor parent communication (e.g., I discuss my child's problem with my child; α = .53). All items were rated on a 5-point scale (from 1 = almost never to 5 = almost always), with high scores indicating more problems.
To assess peer deviancy, youth completed the Self Report of Close Friends (O'Donnell, Hawkins, & Abbott, 1995), describing their first-best and second-best friends' antisocial behavior with a 4-point Likert scale (1 = very much to 4 = not at all). In seventh grade, a 5-item version of this scale was used (e.g., gets in trouble with teachers, drinks alcohol, gets in trouble with police; α = .82). In eighth grade, seven additional items focused on joint antisocial activities were added (e.g., you and best friend got in trouble with the police; α = .89).
Arrest records-Adult arrest data were collected from the court system in the child's county of residence and surrounding counties when youth were 22-23 years old. A record of arrest corresponded to any crime for which the individual had been arrested and adjudicated. Exceptions were probation violations and referrals to youth diversion programs for first-time offenders. Court records of conviction were also collected and revealed that 65% of arrests resulted in convictions. Due to the high correlation between arrest and conviction data (.95 for males, .91 for females), only arrest data were examined in this study.
Trained research assistants assigned a severity score to each offense, using a cross-site coding manual based on the severity coding system used by Cernkovich and Giordano (2001). Status offenses and traffic offences were not included in this study due to their frequent occurrence and relatively normative nature among the general population. Nonviolent crimes included those coded at severity levels 2 (trespassing, vandalism, disorderly conduct, possession of stolen goods, possession of a controlled substance) and 3 (theft, breaking and entering, arson, prostitution). Violent crimes included those coded at severity levels 4 (second-degree assault, assault with a deadly weapon, domestic violence, robbery) and 5 (murder, aggravated assault, rape). As such, and consistent with the U.S. Office of Justice Programs definitions (U.S. Office of Justice Programs, 2015), violent crimes represented crimes directed towards people that used force or the threat of force to cause serious harm, and nonviolent crimes represented crimes that did not involve a threat of harm or attack upon a victim. The total number of life-time arrests for nonviolent and violent crimes were tabulated and used as the outcome variables.
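The coding rule that turns offense severity levels into the two outcome counts can be expressed compactly. The sketch below mirrors the categories listed above (levels 2-3 nonviolent, levels 4-5 violent; status and traffic offenses excluded); the record format is a hypothetical simplification, not the cross-site coding manual itself.

```python
NONVIOLENT_LEVELS = {2, 3}   # e.g., trespassing, vandalism, theft, arson
VIOLENT_LEVELS = {4, 5}      # e.g., assault, robbery, murder, aggravated assault

def tally_arrests(severity_levels: list[int]) -> dict[str, int]:
    """Count lifetime nonviolent and violent arrests from coded severity levels.
    Levels outside 2-5 (e.g., status or traffic offenses) are ignored."""
    counts = {"nonviolent": 0, "violent": 0}
    for level in severity_levels:
        if level in NONVIOLENT_LEVELS:
            counts["nonviolent"] += 1
        elif level in VIOLENT_LEVELS:
            counts["violent"] += 1
    return counts

# Example: one participant with five coded arrests
print(tally_arrests([2, 4, 3, 3, 5]))  # {'nonviolent': 3, 'violent': 2}
```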
---
Procedures
In the spring of children's fifth grade year, research assistants delivered measures to teachers, who then completed them. Parents and youth were interviewed at home in the summer following children's fifth, seventh, and eighth grade years; parents provided informed consent and youth provided assent. Parent interviews were conducted by research assistants who read through the questionnaires and recorded responses. Youth interviews were conducted using computer-administered processes, in which youth completed questionnaires on the computer while listening to the questions via headphones. Prior to all assessments, research assistants were trained in questionnaire administration and all assessment procedures. Financial compensation for study participation was provided to teachers, parents, and children. All study procedures complied with the ethical standards of the American Psychological Association and were approved by the Institutional Review Board of the Pennsylvania State University (#103909).
---
Plan of Analysis
Data analyses proceeded in three stages. First, correlations were run to provide descriptive analyses and demonstrate the simple associations among the study variables. Then, a measurement model was evaluated to determine the fit of the data to represent five latent constructs (childhood social-emotional dysfunction, early adolescent parent detachment, early adolescent deviant peer affiliation, adult nonviolent crime, adult violent crime). Finally, structural equation models were used to test the study hypotheses. Statistical power analysis, using the Preacher and Coffman (2006) method, indicated a power of 1.0, reflecting high power for detecting poor model fit.
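To make this analysis plan concrete, the sketch below shows how latent constructs and structural paths of this general kind could be specified in Python's semopy package using lavaan-style syntax. The indicator names are hypothetical placeholders, the default estimator differs from the full information maximum likelihood approach used in the study, and this is an illustration rather than the authors' analysis code.

```python
import pandas as pd
import semopy

# Measurement part (latent =~ indicators) and structural part (outcome ~ predictors).
MODEL_DESC = """
dysfunction =~ agg_parent + agg_teacher + dysreg_teacher + isol_parent + isol_teacher
detachment  =~ unrecep_youth + secrecy_youth + secrecy_parent + poorcomm_parent
peer_dev    =~ friend1_antisocial + friend2_antisocial
nonviolent  =~ arrests_level2 + arrests_level3
violent     =~ arrests_level4 + arrests_level5

detachment ~ dysfunction
peer_dev   ~ dysfunction
nonviolent ~ dysfunction + detachment + peer_dev
violent    ~ dysfunction + detachment + peer_dev
"""

def fit_pathway_model(data: pd.DataFrame):
    model = semopy.Model(MODEL_DESC)
    model.fit(data)                    # default estimator; the study used FIML for missing data
    stats = semopy.calc_stats(model)   # fit indices such as chi-square, CFI, RMSEA
    return model, stats
```

Relative chi-square (chi-square divided by degrees of freedom), reported in the results below, can be computed directly from the returned fit statistics.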
---
Results
---
Descriptive Analyses and Correlations
The means, standard deviations, and ranges for all study variables are shown in Table 1. Tests for sex differences demonstrated that, compared to girls, boys had significantly higher levels of aggression, emotion dysregulation, parent-rated child secrecy, parent-rated poor parent communication, first best friend's antisocial behavior (7th and 8th grade), second best friend's antisocial behavior (8th grade), and nonviolent and violent crime (for all four severity levels).
Correlations among measures of childhood social-emotional dysfunction, parent detachment, and peer deviancy are shown in Table 2. Measures representing the latent constructs used in this study were significantly inter-correlated, ranging from r = .27 to r = .64 (child social-emotional functioning), r = .29 to r = .72 (parent detachment), and r = .30 to r = .61 (peer deviancy). Measures of child social-emotional dysfunction were significantly correlated with all measures of parent detachment, ranging from r = .14 to r = .32, and with most measures of peer deviancy, ranging from r = .06 to r = .20. Most correlations between parent detachment and peer deviancy were significant, ranging from r = .06 to r = .22.
Correlations between the childhood and adolescent risk factors and adult crime are shown in Table 3. Child aggression and emotion dysregulation significantly predicted all levels of nonviolent and violent crime (range r = .16 to r = .29). Social isolation significantly predicted only violent crime (severity levels 4 and 5, rs = .09 and .08, respectively). Peer deviancy predicted adult nonviolent crime (severity levels 2 and 3, range r = .09 to r = .28) but not violent crime. Parent detachment showed a mixed pattern of significant and non-significant associations with adult crime (range r = .01 to r = .20). These correlations confirm anticipated links between the risk factors and adult crime, with childhood aggression and emotion dysregulation predicting both nonviolent and violent crime, social isolation predicting only violent crime, and peer deviancy and parent detachment predicting primarily nonviolent crime.
---
Structural Equation Models
Next, a measurement model was estimated, with five latent constructs: 1) childhood social-emotional dysfunction (parent and teacher ratings of aggression, emotion dysregulation, and social isolation), 2) early adolescent parent detachment (parent and youth ratings of parent unreceptiveness, poor parent communication, and child secrecy), 3) early adolescent deviant peer affiliation (youth ratings of best friends' deviant behavior), 4) early adult nonviolent crime (severity levels 2 and 3), and 5) early adult violent crime (severity levels 4 and 5). Model fit indices indicated that the predicted relations among observed measures and latent constructs did an acceptable job of representing patterns in the data, χ² (df = 76) = 180.79, p < .001, relative χ² = 2.38, CFI = .96, RMSEA = .043, 90% CI [.035, .051]. Even though a non-significant χ² is preferred, this is rare in large samples, and the relative χ² and other fit indices indicate an adequate fit (see Figure 1).
The structural equation model compared the predictive links between child social-emotional dysfunction, early adolescent parent detachment and peer deviancy, and early adult violent and nonviolent crime when examined together in the same model. The overall fit of the structural model was satisfactory, χ² (df = 78) = 279.51, p < .001, relative χ² = 3.58, CFI = .92, RMSEA = .059, 90% CI [.051, .066]. As shown in Figure 2, child social-emotional dysfunction in late childhood made significant unique contributions to parent detachment and deviant peer affiliation in early adolescence, as well as significant unique contributions to nonviolent and violent crime in early adulthood, with the strongest contribution to violent crime (β = .48). Deviant peer affiliation in early adolescence made significant unique contributions to nonviolent, but not violent, crime. Parent detachment did not show unique significant associations with nonviolent or violent crime.
---
Discussion
Despite the many serious consequences associated with violent crime, limited research exists on risk factors that uniquely predict violent versus nonviolent crime. In the present study, different pathways to violent and nonviolent crime emerged. The severity of child social-emotional dysfunction (aggression, emotion dysregulation, social isolation) was a powerful and direct predictor of violent crime. Although child dysfunction also predicted a direct pathway to nonviolent crime, the variance accounted for was approximately half the variance accounted for in violent crime. Significant indirect pathways through peer deviancy emerged for nonviolent, but not violent, crime, suggesting that this adolescent socialization process plays a more distinctive role in shaping nonviolent than violent crime when both are considered together. Despite significant associations between parent detachment and nonviolent crime, when considered with the other child and adolescent factors, no significant unique pathway emerged.
---
Predicting Violent Crime
In this study, risk for future violent crime was indicated by a childhood profile that included emotional and social dysfunction, as well as aggressive behavior. As children, individuals who later became violent criminals were aggressive (fighting, physically attacking others, destroying others' things) and interpersonally hostile (teasing, threatening others). They were also frequently angry and volatile emotionally (difficulties tolerating frustration, calming down when upset, and controlling anger), and socially isolated, reflecting social discomfort (prefers to be alone, shy) and social demoralization (sulks, unhappy). The results are consistent with studies showing robust associations between later violent offending and both childhood aggression (Broidy et al., 2003; Lai et al., 2015) and childhood emotional dysregulation and social isolation (Hawkins et al., 2000; Henry et al., 1996). In addition, by demonstrating the coherence and predictability of a childhood latent factor of social-emotional dysfunction, the present findings extend prior research by suggesting that the behavioral, emotional, and social difficulties experienced by these vulnerable children need to be considered together, and their developmental interplay understood.
It is well-established that children who grow up in contexts characterized by high levels of exposure to conflict and violence are more likely to display aggression and develop antisocial behavior than children growing up in more protected environments (Dodge et al., 2008). Largely, this has been explained by social learning and social control theories that emphasize the role that parents and peers play in modeling, normalizing, and reinforcing aggression (Dishion, 2014;Loeber, et al., 2009). Recent research has also highlighted the way in which chronic stress associated with violence exposure can negatively impact developing neural systems that affect emotional functioning and support self-regulation (Blair & Raver, 2012). Exposure to environments with high levels of conflict and violence may both teach aggressive behavior and undermine the development of emotion regulation, empathy, and self-control. The result may be a transactional process in which emotion dysregulation, aggressive behavior, and social alienation interact over time to increase the propensity for violence (Vitaro et al., 2002). For example, when frustrated or disappointed, emotionally-dysregulated children are less able to modulate their feelings of anger or inhibit their aggressive impulses. Consequently, they are prone to react aggressively when upset, eliciting negative reactions from others, limiting opportunities for positive social interactions, and exacerbating feelings of social alienation (Bierman, 2004;Dodge et al., 2008). This is the first long-term predictive study to document a unique link between these childhood characteristics and later violence, distinguished from nonviolent crime.
---
Predicting Nonviolent Crime
Nonviolent crime in early adulthood was predicted by elevated child social-emotional dysfunction; however, in contrast to violent crime, the direct pathway between child dysfunction and nonviolent crime was smaller and was accompanied by indirect pathways that included deviant peer affiliation. The findings support a cascade model in which childhood social-emotional dysfunction increases risk for peer deviance in early adolescence, which, in turn, increases risk for initiation of crime (Dishion, 2014). The present findings also extend the existing literature, suggesting that deviant peer affiliation predicts primarily to nonviolent (rather than violent) crime when both are modeled together (Bernburg & Thorlindsson, 1999;Veltri et al., 2014). Relatedly, the findings suggest that social control models emphasizing the influence of deviant norms reinforced by antisocial friends (Bernburg & Thorlindsson, 1999) may explain more of the variance in nonviolent than violent crime. This may be in part because deviant peers often endorse rule-breaking behavior, motivated by self-gain, but less often endorse interpersonal violence, which involves a more radical dismissal of social mores with potentially deleterious effects on group cohesion (Bernburg & Thorlindsson, 1999). In the present study, parent detachment was correlated with deviant peer affiliation and adult crime; however, in the structural model, parent detachment made no unique contribution to crime. This suggests that parent detachment alone does not increase risk for engagement in nonviolent crime.
---
Limitations
Several limitations of the current study warrant consideration. First, although the use of the current at-risk sample conferred many advantages by providing rich data on childhood and adolescent risks and adult crime, the sample was not nationally representative. The extent to which the current findings can be generalized to normative populations is not clear. The sample was selected from at-risk communities characterized by elevated rates of poverty and crime which may have heightened the capacity to predict future crime; prediction may be more difficult in communities with lower base rates of crime (Lochman & CPPRG, 1995). Second, although the study utilized several widely-used measures, the parent detachment measure was adapted for the present study and was based on parent and child ratings; a validated observational index of parent-child communication would have strengthened the assessment model. Third, only two indices of adolescent social experiences were assessed in this study (parent detachment, deviant peer affiliation), and other indices may have shown additional effects on crime outcomes. Relatedly, although the assessments in seventh and eighth grade captured risk during the transition to adolescence, it is possible that assessments in later adolescence and more proximal to early adulthood might have yielded somewhat different findings. Still, the study of risk factors in early adolescence is likely to be most informative for early intervention efforts targeting the prevention of criminal behavior.
---
Clinical Implications
The findings suggest that the developmental roots of violent crime may be evident by the end of childhood, that children at high risk for later violence might be identified by late childhood, and that interventions designed to reduce violent crime may be more powerful when they start in childhood. The current findings also suggest that preventive interventions would benefit by focusing concurrently on addressing the emotional and social difficulties of children at high risk, as well as their high levels of aggressive behavior. In contrast, the study findings suggest that prevention efforts targeting nonviolent crime may require particular attention to adolescent social experiences, particularly deviant peer affiliation during early adolescence. Fostering stronger parent-youth communication bonds and structuring free time to reduce opportunities for unstructured deviant peer activity in early adolescence may help in the prevention of nonviolent crime. Yet, given this study's findings of differential patterns of associations between adolescent social experiences and type of adult crime, it is likely that prevention efforts targeting parent-youth bonding and communication and peer affiliations in adolescence alone will have less impact on the reduction of violent crime.
---
Strengths and Future Directions
To date, little longitudinal research has examined the relative roles of child and adolescent risk factors in the unique pathways to violent and nonviolent crime. The current study, with its assessment of risk across two distinct developmental time periods, afforded a unique opportunity to explore the comparative roles of childhood social-emotional dysfunction and early adolescent risk in the development of violent and nonviolent crime. The findings suggest distinct as well as shared developmental pathways (Nagin & Tremblay, 1999), and challenge conceptual frameworks asserting the generality of all forms of criminal behavior. The implications are that deviant peer affiliation in adolescence contributes primarily to nonviolent crime. In contrast, child social-emotional development appears key in the pathway to violent crime. These findings parallel the differential predictors of overt aggression versus covert rule-breaking behavior in childhood and adolescence (Burt, 2012) and suggest potential continuity into differential patterns of adult crime. Given the limited research examining differential prediction of nonviolent and violent crime, and the serious consequences of violent crime, further investigation of pathways to violent crime is warranted. This research should examine risk factors across different developmental periods, include markers of social and emotional functioning, as well as aggressive and antisocial behavior, and explore potential mechanisms of transmission.
Figure caption (Figure 3, online appendix): Selecting High-Risk and Normative Samples. a Across three sequential years (cohorts 1-3), children were eligible for the high-risk sample based on elevated teacher and parent screens, without regard for sex or race. Assignment to intervention or control group was based on the school they attended in first grade. b Children were eligible for the normative sample only if they were in cohort 1 (not cohort 2 or 3) and if they attended a control school (not an intervention school). Eligible children were stratified by sex and race to represent the school population and then randomly selected from those eligible.
---
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.
---
Identifying children exposed to maltreatment: a systematic review update (2020; journal article / review)
Paper ID: 321ba613b077eb11fc23c41838a206e686a91283

Abstract

Background: Child maltreatment affects a significant number of children globally. Strategies have been developed to identify children suspected of having been exposed to maltreatment with the aim of reducing further maltreatment and impairment. This systematic review evaluates the accuracy of strategies for identifying children exposed to maltreatment. Methods: We conducted a systematic search of seven databases: Medline, Embase, PsycINFO, Cumulative Index to Nursing and Allied Health Literature, Cochrane Libraries, Sociological Abstracts and the Education Resources Information Center. We included studies published from 1961 to July 2, 2019 estimating the accuracy of instruments for identifying potential maltreatment of children, including neglect, physical abuse, emotional abuse, and sexual abuse. We extracted data about accuracy and narratively synthesised the evidence. For five studies, where the population and setting matched known prevalence estimates in an emergency department setting, we calculated false positives and negatives. We assessed risk of bias using QUADAS-2. Results: We included 32 articles (representing 31 studies) that evaluated various identification strategies, including three screening tools (SPUTOVAMO checklist, Escape instrument, and a 6-item screening questionnaire for child sex trafficking). No studies evaluated the effects of identification strategies on important outcomes for children. All studies were rated as having serious risk of bias (often because of verification bias). The findings suggest that use of the SPUTOVAMO and Escape screening tools at the population level (per 100,000) would result in hundreds of children being missed and thousands of children being over-identified. Conclusions: There is low to very low certainty evidence that the use of screening tools may result in high numbers of children being falsely suspected or missed. These harms may outweigh the potential benefits of using such tools in practice (PROSPERO 2016:CRD42016039659).

---
Background
Child maltreatment, including physical abuse, sexual abuse, emotional abuse, and neglect, impacts a significant number of children worldwide [1][2][3]. For example, a survey involving a nationally representative sample of American children selected using telephone numbers from 2013 to 2014 found that lifetime rates of maltreatment for children aged 14 to 17 were 18.1% for physical abuse, 23.9% for emotional abuse, 18.4% for neglect, and 14.3% and 6.0% for sexual abuse of girls and boys, respectively [4]. Child maltreatment is associated with many physical, emotional, and relationship consequences across the lifespan, such as developmental delay first seen in infancy; anxiety and mood disorder symptoms and poor peer relationships first seen in childhood; substance use and other risky behaviours often first seen in adolescence; and increased risk for personality and psychiatric disorders, relationship problems, and maltreatment of one's own children in adulthood [5][6][7][8][9]. Given the high prevalence and serious potential negative consequences of child maltreatment, clinicians need to be informed about strategies to accurately identify children potentially exposed to maltreatment, a task that "can be one of the most challenging and difficult responsibilities for the pediatrician" [10]. Two main strategies for identification of maltreatment, screening and case-finding, are often compared to one another in the literature [11,12]. Screening involves administering a standard set of questions, or applying a standard set of criteria, to assess for the suspicion of child maltreatment in all presenting children ("mass screening") or high-risk groups of children ("selective screening"). Case-finding, alternatively, involves providers being alert to the signs and symptoms of child maltreatment and assessing for potential maltreatment exposure in a way that is tailored to the unique circumstances of the child.
A previous systematic review by Bailhache et al. [13] summarized "evidence on the accuracy of instruments for identifying abused children during any stage of child maltreatment evolution before their death, and to assess if any might be adapted to screening, that is if accurate screening instruments were available." The authors reviewed 13 studies addressing the identification of physical abuse (7 studies), sexual abuse (4 studies), emotional abuse (1 study), and multiple forms of child maltreatment (1 study). The authors noted in their discussion that the tools were not suitable for screening, as they either identified children too late (i.e., children were already suffering from serious consequences of maltreatment) or the performance of the tests was not adaptable to screening, due to low sensitivity and specificity of the tools [13].
This review builds upon the work of Bailhache et al. [13] and performs a systematic review with the objective of assessing evidence about the accuracy of instruments for identifying children suspected of having been exposed to maltreatment (neglect, as well as physical, sexual abuse, emotional abuse). Similar to the review by Bailhache et al. [13], we investigate both screening tools and other identification tools or strategies that could be adapted into screening tools. In addition to reviewing the sensitivity and specificity of instruments, as was done by Bailhache et al. [13], for five studies, we have also calculated estimates of false positives and negatives per 100 children, a calculation which can assist providers in making decisions about the use of an instrument [14]. This review contributes to an important policy debate about the benefits and limitations of using standardized tools (versus case-finding) to identify children exposed to maltreatment. This debate has become increasingly salient with the publication of screening tools for adverse childhood experiences, or tools that address child maltreatment alongside other adverse experiences [15,16].
It should be noted here that while "screening" typically implies identifying health problems, screening for child maltreatment is different in that it usually involves identifying risk factors or high-risk groups. As such, while studies evaluating tools that assist with identification of child maltreatment are typically referred to as diagnostic accuracy studies [17], the word "diagnosis" is potentially misleading. Instead, screening tools for child maltreatment typically codify several risk and clinical indicators of child maltreatment (e.g., caregiver delay in seeking medical attention without adequate explanation). As such, they may more correctly be referred to as tools that identify potential maltreatment, or signs, symptoms and risk factors that have a strong association with maltreatment and may lead providers to consider maltreatment as one possible explanation for the sign, symptom, or risk factor. Assessment by a health care provider should then include consideration of whether there is reason to suspect child maltreatment. If maltreatment is suspected, this would lead to a report to child protection services (CPS) in jurisdictions with mandatory reporting obligations (e.g., Canada, United States) or to child social services for those jurisdictions bound by occupational policy documents (e.g., United Kingdom) [18]. Confirmation or verification of maltreatment would then occur through an investigation by CPS or a local authority; they, in turn, may seek consultation from one or more health care providers with specific expertise in child maltreatment. Therefore, throughout this review we will refer to identification tools as those that aid in the identification of potential child maltreatment.
---
Methods
A protocol for this review is registered with the online systematic review register, PROSPERO (PROSPERO 2016:CRD42016039659), and study results are reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist (see supplemental file 1). As the review by Bailhache et al. [13] considered any English or French materials published between 1961 and April 2012 (only English-language materials were retrieved from their search), we searched for English-language materials published between 2012 and July 2, 2019 (when the search was conducted). Additional inclusion criteria are found in Table 1. Inclusion criteria for this review were matched to those in Bailhache et al.'s [13] review. We included diagnostic accuracy studies [17] that 1) evaluated a group of children by a test, examination or other procedure (hereafter referred to as the index test) designed to identify children potentially exposed to maltreatment and also 2) evaluated the same group of children (or ideally a random subsample) by a reference standard (acceptable reference standards are listed in Table 1) that confirmed or denied exposure to potential maltreatment. We excluded articles that assessed psychometric properties of child maltreatment measures unless diagnostic data was available in the paper.
The searches for the review update were conducted in seven databases: Medline, Embase, PsycINFO, Cumulative Index to Nursing and Allied Health Literature, Sociological Abstracts, the Education Resources Information Center, and Cochrane Libraries (see supplemental file 2 for example search). Forward and backward citation chaining was also conducted to complement the search. All articles identified by our searches were screened independently by two reviewers at the title and abstract and full-text level. An article suggested for inclusion by one screener was sufficient to forward it to full-text review. Any disagreements at full text stage were resolved by discussion.
---
Data extraction and analysis
For all included studies, one author extracted the following data: study design, the study's inclusion criteria, form of potential child maltreatment assessed, index tool, sample size, reference standard, and values corresponding to sensitivity and specificity. While our original protocol indicated that we would extract and analyze data about child outcomes (e.g., satisfaction, well-being), service outcomes (e.g., referral rates), and child wellbeing outcomes (e.g., internalizing symptoms, externalizing symptoms, suicidal ideation) from the studies (e.g., from randomized trials that evaluated screening versus another identification strategy and assessed associated outcomes), no such data were available. Extracted data were verified by a second author by cross-checking the results in all tables with data from the original articles. Disagreements were resolved by discussion.
Sensitivity and specificity are "often misinterpreted and may not reflect well the effects expected in the population of interest" [14]. Other accuracy measures, such as false positives and false negatives, can be more helpful for making decisions about the use of an instrument [14], but determining them requires a reasonable estimate of prevalence in the intended sample (in this case of the exposure, child maltreatment) and in the intended setting (e.g., emergency department). Although there are no clear cut-off points for acceptable proportions of false negatives and positives, as acceptable cutoffs depend on the clinical setting and patient-specific factors, linking false positives and negatives to downstream consequences (e.g., proportion of children who will undergo a CPS investigation who should not or who miss being investigated) can assist practitioners in determining acceptable cut-offs for their practice setting.
Table 1 (continued): Inclusion and exclusion criteria
3. Comparator (reference test). Studies had to have an acceptable reference standard, i.e. "expert assessments, such as child's court disposition; substantiation by the child protection services or other social services; assessment by a medical, social or judicial team using one or several information sources (caregivers or child interview, child symptoms, child physical examination, and other medical record review)" [13].
4. Outcomes. Studies had to assess one of the following outcomes: sensitivity, specificity, positive predictive value, or negative predictive value.
5. Study design. Studies need not include a comparison population (e.g., case series could be included if the intention was to evaluate one of the outcomes listed above).
Exclusion criteria
1. Ineligible population. Studies that only addressed adults' or children's exposure to intimate partner violence.
2. Ineligible intervention (index test). Studies that identified a clinical indicator for child maltreatment, such as retinal hemorrhaging, but not child maltreatment itself and tools that identified a different population (e.g., general failure to thrive, children's exposure to intimate partner violence).
3. Ineligible comparator (reference test). Studies that did not have an acceptable reference standard (e.g., parent reports of abuse were ineligible).
4. Ineligible outcomes. Studies that at minimum did not set out to evaluate at least one of the following accuracy outcomes: sensitivity, specificity, positive predictive value, negative predictive value.
5. Ineligible publication types. Studies published as abstracts were excluded, as not enough information was available to critically appraise the study design. Also excluded were studies published in non-article format, such as books or theses. The latter were excluded for pragmatic issues, but recent research suggests that inclusion of these materials may have little impact on results [19].

For those studies where prevalence estimates were available, sensitivity and specificity values were entered into GRADEpro software in order to calculate true/false positives/negatives per 100 children tested. This free, online software allows users to calculate true/false positives/negatives when users enter sensitivity and specificity values of the index test and an estimate of prevalence. In GRADEpro, true/false positives/negatives can be calculated across 100, 1000, 100,000, or 1,000,000 patients. We selected 100 patients as a total, as it allows easy conversion to percentage of children. We also give an example of true/false positives/negatives per 100,000 children tested, which is closer to a population estimate or numbers across several large emergency departments. To calculate these values, two prevalence rates were used (2 and 10%) based on the range of prevalence of child maltreatment in emergency departments in three high-income country settings [20], as most of the identified screening tools addressed children in these settings. Use of these prevalence rates allows for a consistent comparison of true/false positives/negatives per 100 children across all applicable studies. For consistency and to enhance accuracy of calculations in GRADEpro of true/false positives/negatives proportions per 100, where possible, all sensitivity and specificity values and confidence intervals for the included studies were recalculated to six decimal places (calculations for confidence intervals used p ± 1.96 × √(p(1 - p)/n)). In GRADEpro, the formula for false positives is (1 - specificity) * (1 - prevalence) and the formula for false negatives is (1 - sensitivity) * (prevalence). As the majority of studies differed in either a) included populations or b) applied index tests, we were unable to pool data statistically across the studies. Instead, we narratively synthesized the results by highlighting the similarities and differences in false positives/negatives across the included studies.
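These calculations are simple to reproduce. The sketch below applies the stated formulas for false negatives and false positives per 100 children tested, together with the Wald-type confidence interval for a proportion; it illustrates the arithmetic and is not the GRADEpro software itself, and the example sensitivity, specificity, and sample-size values are hypothetical.

```python
from math import sqrt

def false_rates_per_n(sensitivity: float, specificity: float,
                      prevalence: float, n: int = 100) -> dict:
    """False negatives and false positives per n children tested, using
    FN = (1 - sensitivity) * prevalence and FP = (1 - specificity) * (1 - prevalence)."""
    return {
        "false_negatives": (1 - sensitivity) * prevalence * n,
        "false_positives": (1 - specificity) * (1 - prevalence) * n,
    }

def wald_ci(p: float, n: int) -> tuple[float, float]:
    """Approximate 95% confidence interval: p +/- 1.96 * sqrt(p * (1 - p) / n)."""
    half_width = 1.96 * sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical example: sensitivity 0.85, specificity 0.87, 2% prevalence.
print(false_rates_per_n(0.85, 0.87, 0.02))
# about 0.3 children missed and 12.7 falsely identified per 100 tested
print(wald_ci(0.87, 785))  # CI for a specificity estimated from a hypothetical 785 unexposed children
```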
For the population estimate, we modeled the downstream consequences of using the SPUTOVAMO checklist to identify physical abuse or neglect among children under 8 years of age presenting to the emergency department with any physical injury. We calculated true/false positives/negatives per 100,000 using the lower end of the prevalence range (2%) [20]. Based on American estimates, we assumed that 17% of children who are reported to child welfare are considered to have substantiated maltreatment and that, among children with substantiated maltreatment, 62% may receive post-investigation services [21]. We also modeled downstream consequences of false negatives, based on an estimate that 25 to 50% of children who are exposed to maltreatment need services for mental health symptoms [22]. We modeled consequences of false positives by assuming that all suspicions lead to reports, which lead to CPS investigations.
---
Critical appraisal
One author critically appraised each study using the QUADAS-2 tool [23] and all data were checked by a second author, with differences resolved through consensus. The QUADAS-2 tool evaluates risk of bias related to a) patient selection, b) index test, c) reference standard, and d) flow and timing. Questions related to "applicability" in QUADAS-2 were not answered because they overlap with questions involved in the GRADE process [17]. As the developers of QUADAS-2 note [23], an overall rating of "low" risk of bias is only possible when all domains are assessed as low risk of bias. An answer of "no" to any of the questions indicates that both the domain (e.g., "patient selection") and the overall risk of bias for the study are high. In this review, a study was rated as "high" risk of bias if one or more domains was ranked as high risk of bias, a study was ranked as "low" risk of bias when all domains were rated as low risk of bias, and a study was ranked as "unclear" risk of bias otherwise (i.e., when the study had one or more domains ranked as "unclear" risk of bias and no domains ranked as "high" risk of bias).
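The rule used here to derive an overall risk-of-bias rating from the four QUADAS-2 domains can be written as a short decision function. The sketch below reflects the rule as described above; the dictionary input format is a hypothetical convenience.

```python
def overall_quadas2_rating(domain_ratings: dict[str, str]) -> str:
    """Combine QUADAS-2 domain ratings ('low', 'high', 'unclear') across the four domains:
    patient selection, index test, reference standard, and flow and timing."""
    ratings = set(domain_ratings.values())
    if "high" in ratings:
        return "high"      # any high-risk domain -> overall high risk of bias
    if ratings == {"low"}:
        return "low"       # all domains low -> overall low risk of bias
    return "unclear"       # some domains unclear, none high

print(overall_quadas2_rating({
    "patient_selection": "low",
    "index_test": "unclear",
    "reference_standard": "low",
    "flow_and_timing": "low",
}))  # -> unclear
```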
---
Grading of recommendations, assessment, development and evaluation (GRADE)
Evidence was assessed using GRADE [17]. GRADE rates our certainty that the effect we present is close to the true effect; the certainty that the effect we present is close to the true effect is rated as high, moderate, low or very low certainty. A GRADE rating is based on an assessment of five domains: (1) risk of bias (limitations in study designs); ( 2) inconsistency (heterogeneity) in the direction and/or size of the estimates of effect; (3) indirectness of the body of evidence to the populations, interventions, comparators and/or outcomes of interest; (4) imprecision of results (few participants/events/observations, wide confidence intervals); and (5) indications of reporting or publication bias. For studies evaluating identification tools and strategies, a body of evidence starting off with cross-sectional accuracy studies is considered "high" certainty and then is rated down to moderate, low, or very low certainty based on the five factors listed above.
---
Results
The updated search and citation chaining retrieved 3943 records; after de-duplication, 1965 titles and abstracts were screened for inclusion (see Fig. 1). From this set of results, 93 full-text articles were reviewed for inclusion, of which 19 new articles (representing 18 studies) were included [24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42]. In addition, the 13 studies evaluated in the Bailhache et al. review [43][44][45][46][47][48][49][50][51][52][53][54][55] were included in this review update, for a total of 32 articles (31 studies).
---
Study characteristics
Overall, we did not find any studies that measured important health outcomes after the use of a screening tool or other instrument. Instead, the included tools and strategies provided accuracy estimates for a range of maltreatment types (see supplemental file 3 for study characteristics), including multiple types of maltreatment (6 studies); medical child maltreatment (also known as caregiver fabricated illness in a child, factitious disorder imposed on another and Munchausen syndrome by proxy, 1 study); sexual abuse (7 studies), including child sex trafficking (3 studies); emotional abuse (1 study); and physical abuse (18 studies), including abusive head trauma (11 studies).
---
Risk of bias and GRADE assessment of included studies
One study was rated as having an unclear risk of bias and all remaining studies were rated as high risk of bias, with 23 studies (72%) having high risk of bias across two or more domains (see supplemental file 4 for critical appraisal rankings). A number of studies used very narrow age ranges to test their index test, representing potentially inappropriate exclusions for the basis of studying identification strategies. For example, while very young children (under 5 years of age) are at most risk of serious impairment and death from physical abuse including abusive head trauma, rates of non-fatal physical abuse peak between 3 and 12 years [56]. Ideally, index tests that seek to identify potential physical abuse should address all children who are legally entitled to protection (or at a minimum, address children ≤12 years).
A number of studies did not apply the reference standard to all children and instead only applied it to a subset of children who were positively identified by the index test or some other method, which can lead to serious verification bias (i.e., no data for the number of potentially maltreated children missed). For example, the reference standard was applied to only 55/18275 (0.3%) of the children in the study by Louwers et al. [26]. Only Sittig et al. [27], in a study assessing one of the recently published screening tests, applied the reference standard to a random sample of 15% of the children who received a negative screen by the index test, thereby reducing the potential for serious verification bias. A few studies also used the index test as part of the reference standard, which can lead to serious incorporation bias. For example, Greenbaum et al. [37] noted that the 6-item child sex trafficking screening questions were "embedded within the 17-item questionnaire," which was used by the reference standard (health care providers) to determine if child sex trafficking potentially occurred.
Using the GRADE approach to evaluate the certainty of evidence, the included studies started at high certainty as all but six studies were cross-sectional studies. The evidence was rated down due to very serious concerns for risk of bias (making the evidence "low" certainty) and further rated down for imprecision (making the evidence "very low" certainty).
---
General accuracy
Table 2 reports sensitivity and specificity rates for each study. Studies are organized according to child maltreatment type (multiple types of maltreatment, medical child maltreatment, sexual abuse, child sex trafficking, emotional abuse, physical abuse and neglect, and abusive head trauma). The type of child maltreatment assessed by each tool is specified, as is the name of the identification strategy.
In addition to the studies previously reviewed by Bailhache et al. [13], this systematic review update identified three screening tools, as well as an identification tool for medical child maltreatment, "triggers" embedded in an electronic medical record, four clinical prediction tools, and two predictive symptoms of abusive head trauma. False positive/negative values are reported only for the studies using screening tools with samples where the prevalence of child maltreatment could be estimated; all values for the studies identified in the Bailhache et al. [13] review are available in Table 2.
---
Screening instruments
Three screening instruments were identified in this systematic review update: 1) the SPUTOVAMO checklist, 2) the Escape instrument, and 3) a 6-item screening questionnaire for child sex trafficking. The SPUTOVAMO checklist [24,27,28,42] is a screening instrument that determines whether there is a suspicion of child maltreatment via a positive answer to one or more of five questions (e.g., injury compatible with history and corresponding with age of child?). Its use is mandatory in Dutch emergency departments and "out-of-hours" primary care locations. Two studies [24,42] evaluated if the SPUTOVAMO checklist could detect potential physical abuse, sexual abuse, emotional abuse, neglect, or exposure to intimate partner violence in children under 18 years of age presenting to either out-of-hours primary care locations [24] or an emergency department [42] in the Netherlands. Two separate studies reported on the use of the SPUTOVAMO checklist to assess for potential exposure to physical abuse in children under 8 years of age presenting to the emergency department with a physical injury [27] or children under 18 years of age presenting to a burn centre with burn injuries [28]. Two studies evaluated the Escape instrument [25,26], a screening instrument very similar in content and structure to the SPUTOVAMO checklist. The Escape instrument involves five questions (e.g., is the history consistent?) that are used to assess for potential physical abuse, sexual abuse, emotional abuse, neglect, and exposure to intimate partner violence in children under 16 years of age [25] or 18 years of age [26] presenting to an emergency department.
Three studies [36,37,39] reported on use of a 6-item screening questionnaire for child sex trafficking, where an answer to two or more questions (e.g., Has the youth ever run away from home?) indicated suspicion of a child being exposed to sex trafficking. The studies tested the screening questionnaire in children of a similar age group (10, 11, or 12 to 18 years of age) presenting to emergency departments [36,37,39], child advocacy centres, or teen clinics [37].
Five studies [24][25][26][27]42] had samples where the prevalence of child maltreatment could be estimated. In other words, each study's included sample was similar enough (e.g., children less than 18 years presenting to the emergency department) to match 2% to 10% prevalence estimates found in emergency departments [20]. As shown in Table 3, the Sittig et al. [27] study, which evaluated the SPUTOVAMO checklist, found that per 100 children tested, 0 potentially physically abused children were missed and 0 to 2 potentially neglected children were missed. Twelve to 13 children were falsely identified as potentially physically abused or neglected.
The other studies suffered from verification or incorporation bias leading to a sensitivity estimate that is too high (underestimating false negative estimates) and a specificity estimate that is too high (underestimating false positive estimates). These studies [24][25][26]42] found that per 100 children tested, 0 to 9 potentially maltreated children were missed and 2 to 69 children were falsely identified as potentially maltreated. For the studies that evaluated the SPUTOVAMO checklist specifically [24,42], 0 to 9 potentially maltreated children were missed and 2 to 69 children were falsely identified as potentially maltreated. For the studies that evaluated the Escape tool [25,26], 0 to 2 children were missed and 2 children were falsely identified as potentially maltreated.
---
Modelling service outcomes of the SPUTOVAMO checklist for physical abuse or neglect based on a population estimate
After using a screening tool, children will receive some type of service depending on the results. We modelled what would happen to children after the use of the SPUTOVAMO checklist on a population level per 100,000 children (see supplemental file 5 for modelling using the Escape instrument).
When using the SPUTOVAMO checklist, providers may correctly identify 2000 children potentially exposed to physical abuse and 1666 potentially exposed to neglect. American estimates [21] suggest 17% of children who are reported to child welfare are substantiated and 62% of substantiated children receive post-investigation services. Using these estimates, this means that some form of post-investigative services may be received by 211 children with substantiated physical abuse and 176 children with substantiated neglect.
No children exposed to potential physical abuse and 334 children who have been exposed to potential neglect would be missed. Since an estimated 25 to 50% of children who are exposed to maltreatment need services for mental health symptoms [21], 84 children potentially exposed to neglect would not be referred for the mental health services they need.
In addition, we calculated that 13,230 children would be misidentified as potentially physically abused and 13,034 children would be misidentified as potentially neglected. Although these children would likely receive an assessment by a qualified physician that would determine they had not experienced maltreatment, all of these children could undergo a stressful and unwarranted child protection services investigation.
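To make the arithmetic behind these population-level figures explicit, the sketch below chains the stated inputs (100,000 children tested, 2% prevalence, 17% substantiation among reports, 62% post-investigation service receipt, and 25% of maltreated children needing mental health services). The sensitivity and specificity values in the example are approximations chosen to be consistent with the counts reported above; the function is a worked illustration of the modelling, not output from the study.

```python
def model_downstream(n_tested: int, prevalence: float,
                     sensitivity: float, specificity: float,
                     substantiation_rate: float = 0.17,
                     service_rate: float = 0.62,
                     mh_need_rate: float = 0.25) -> dict:
    """Downstream consequences of screening n_tested children for one maltreatment type."""
    exposed = n_tested * prevalence
    true_positives = exposed * sensitivity
    false_negatives = exposed - true_positives
    false_positives = (n_tested - exposed) * (1 - specificity)
    return {
        "correctly_identified": round(true_positives),
        "receive_post_investigation_services": round(true_positives * substantiation_rate * service_rate),
        "missed": round(false_negatives),
        "missed_and_needing_mental_health_services": round(false_negatives * mh_need_rate),
        "falsely_identified_and_investigated": round(false_positives),
    }

# Neglect example: sensitivity/specificity approximated to match the counts above.
print(model_downstream(100_000, 0.02, sensitivity=0.833, specificity=0.867))
```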
---
Medical child maltreatment instrument
Greiner et al. [31] evaluated a "medical child maltreatment" instrument (also known as caregiver fabricated illness in a child [57] or factitious disorder imposed on another [58]), where a positive answer to four or more of the 15 questions indicated suspicion of medical child maltreatment (e.g., caregiver has features of Munchausen syndrome (multiple diagnoses, surgeries, and hospitalizations, with no specific diagnosis)).
---
Triggers in an electronic medical record
Berger et al. [35] evaluated "triggers" added to an electronic medical record to help identify children under 2 years of age at risk for physical abuse (e.g., a "yes" response to "Is there concern for abuse or neglect?" in the pre-arrival documentation by a nurse; documentation of "assault" or "SCAN" as the chief complaint). This study suffers from serious verification bias, since only abused children and a small, non-random sample (n = 210) were evaluated by the reference standard.
---
Clinical predication rules and predictive symptoms
Five studies (published in six articles) evaluated four clinical prediction tools (Burns Risk Assessment for Neglect or Abuse Tool, Pediatric Brain Injury Research Network clinical prediction rule, Predicting Abusive Head Trauma, and Hymel's 4-or 5-or 7-variable prediction models).
Kemp et al. [40] investigated the Burns Risk Assessment for Neglect or Abuse Tool, a clinical prediction rule to assist with the recognition of suspected maltreatment, especially physical abuse or neglect. Hymel et al. evaluated a five-variable clinical prediction rule (derivation study) [34] and a four-variable clinical prediction rule (validation study) [33] in identifying potential abusive head trauma in children less than 3 years of age who were admitted to the post-intensive care unit for management of intracranial injuries. An additional article by Hymel et al. [38] combined the study populations in the derivation and validation studies in order to evaluate a seven-variable clinical prediction rule in identifying potential abusive head trauma. The seven-variable clinical prediction rule used seven indicators to predict potential abusive head trauma (e.g., any clinically significant respiratory compromise at the scene of injury, during transport, in the emergency department, or prior to admission).
Pfeiffer et al. [41] evaluated the Pediatric Brain Injury Research Network clinical prediction rule. This clinical prediction rule evaluated the likelihood of abusive head trauma in acutely head-injured children under 3 years of age admitted to the post-intensive care unit. The authors recommended that children who presented with one or more of the following four predictor variables should be evaluated for abuse (respiratory compromise before admission; any bruising involving ears, neck, and torso; any subdural hemorrhages and/or fluid collections that are bilateral or interhemispheric; any skull fractures other than an isolated, unilateral, nondiastatic, linear parietal skull fracture).
Two studies evaluated different predictive symptoms of abusive head trauma (parenchymal brain lacerations and hematocrit levels ≤30% on presentation). Palifika et al. [29] examined the frequency of lacerations in children less than 3 years of age who had abusive head trauma (as determined by the institutional child abuse team) compared with accidentally injured children with moderate-to-severe traumatic brain injury. For children under 5 years of age who were admitted to one of two level-one pediatric trauma centres with a diagnosis of traumatic brain injury, Acker et al. [32] identified hematocrit values of 30% or less as a finding that should prompt further investigation for potential abusive head trauma.
---
Discussion
This review updates and expands upon the systematic review published by Bailhache et al. [13] and was conducted to evaluate the effectiveness of strategies for identifying potential child maltreatment. Since the publication of Bailhache et al.'s [13] systematic review, there have been 18 additional studies published. The included studies reported the sensitivity and specificity of three screening tools (the SPUTOVAMO checklist, the Escape instrument, and a 6-item screening questionnaire for child sex trafficking), as well as the accuracy of an identification tool for medical child maltreatment, "triggers" embedded in an electronic medical record, four clinical prediction tools (Burns Risk Assessment for Neglect or Abuse Tool, Pediatric Brain Injury Research Network clinical prediction rule, Predicting Abusive Head Trauma, and Hymel's 4-, 5-, or 7-variable prediction models), and two predictive symptoms of abusive head trauma (parenchymal brain lacerations and hematocrit levels ≤30% on presentation). As the Bailhache et al. [13] systematic review identified no screening tools, the creation of the SPUTOVAMO checklist, Escape instrument, and 6-item child sex trafficking screening questionnaire represents a notable development since their publication. The recent creation of an identification tool for child sex trafficking also reflects current efforts to recognize and respond effectively to this increasingly prevalent exposure. Aside from these new developments, many of the other points discussed by Bailhache et al. [13] were confirmed in this update: it is still difficult to assess the accuracy of instruments to identify potential child maltreatment as there is no gold standard for identifying child maltreatment; what constitutes "maltreatment" still varies somewhat, as do the behaviours that are considered abusive or neglectful (e.g., we have excluded children's exposure to intimate partner violence, which is increasingly considered a type of maltreatment); and it is still challenging to identify children early in the evolution of maltreatment (many of the identification tools discussed in this review are not intended to identify children early, and as such children are already experiencing significant consequences of maltreatment).
The studies included in this systematic review provide additional evidence that allow us to assess the effectiveness of strategies for identifying potential exposure to maltreatment. Based on the findings of this review (corresponding with the findings of Bailhache et al.'s [13] review), we found low certainty evidence and high numbers of false positives and negatives when instruments are used to screen for potential child maltreatment. Although no studies assessed the effect of screening tools on child well-being outcomes or recurrence rates, based on data about reporting and response rates [21,22], we can posit that children who are falsely identified as potentially maltreated by screening tools will likely receive a CPS investigation that could be distressing. Furthermore, maltreated children who are missed by screening tools will not receive or will have delayed access to the mental health services they need.
We identified several published instruments that are not intended for use as screening tools, such as clinical prediction rules for abusive head trauma. Clinical prediction tools or rules, such as Hymel's variable prediction model, combine medical signs, symptoms, and other factors in order to predict diseases or exposures. While they may be useful for guiding clinicians' decision-making, and may be more accurate than clinical judgement alone [59], they are not intended for use as screening tools. Instead, the tools "act as aids or prompts to clinicians to seek further clinical, social or forensic information and move towards a multidisciplinary child protection assessment should more information in support of AHT [abusive head trauma] arise" [41]. As all identification tools demand clinician time and energy, widespread implementation of any clinical prediction tool is not warranted until it has undergone three stages of testing: derivation (identifying factors that have predictive power), validation (demonstrating evidence of reproducible accuracy), and impact analysis (evidence that the clinical prediction tool changes clinician behaviour and improves patient-important outcomes) [60]. Similar to the findings of a recent systematic review on clinical prediction rules for abusive head trauma [41], in this review we did not find any clinical prediction rules that had undergone an impact analysis. However, several recent studies have considered the impact of case identification via clinical prediction rules. These include an assessment of whether the Predicting Abusive Head Trauma clinical prediction rule alters clinicians' abusive head trauma probability estimates [61], emergency clinicians' experiences of using the Burns Risk Assessment for Neglect or Abuse Tool in an emergency department setting [62], and cost estimates for identification using the Pediatric Brain Injury Research Network clinical prediction rule as compared to assessment as usual [63]. Additional research on these clinical prediction rules may determine whether such rules are more accurate than a clinician's intuitive estimation of risk factors for potential maltreatment and how they impact patient-important outcomes.
Many of the included studies had limitations in their designs, which lowered our confidence in their reported accuracy parameters. Limitations in this area are not uncommon. A recent systematic review by Saini et al. [64] assessed the methodological quality of studies assessing child abuse measurement instruments (primarily studies assessing psychometric properties). The authors found that "no instrument had adequate levels of evidence for all criteria, and no criteria were met by all instruments" [64]. Our review also resulted in similar findings to the original review by Bailhache et al. [13], in that 1) most studies did not report sufficient information to judge all criteria in the risk of bias tool; 2) most studies did not clearly blind the analysis of the reference standard from the index test (or the reverse); 3) some studies [26,36,37,39] included the index test as part of the reference standard (incorporation bias), which can overestimate the accuracy of the index test; and 4) some studies used a case-control design [29,31,36], which can overestimate the performance of the index test. A particular challenge, also noted by Bailhache et al. [13], was the quality of reporting in many of the included studies. Many articles failed to include clear contingency tables in reporting their results, making it challenging for readers to fully appreciate missing values and potentially inflated sensitivity and specificity rates. For example, one study evaluating the SPUTOVAMO checklist reported 7988 completed SPUTOVAMO checklists. However, only a fraction of these completed checklists were evaluated by the reference standard (verification bias, discussed further below) (193/7988, 2.4%), and another reference standard (a local CPS agency) was used to evaluate an additional portion of SPUTOVAMO checklists (246/7988, 3.1%); the negative predictive and positive predictive value calculations were therefore based on different confirmed cases. Ideally, missing data and indeterminate values should be reported [23]. Researchers have increasingly called for diagnostic accuracy studies to report indeterminate results in a sensitivity analysis [65].
Verification bias was a particular study design challenge in the screening studies identified in this review. For example, Dinpanah et al. [25] examined the accuracy of the Escape instrument, a five-question screener applied in emergency department settings, for identifying children potentially exposed to physical abuse, sexual abuse, emotional abuse, neglect, or intimate partner violence. The authors report a sensitivity and specificity of 100% and 98%, respectively. While this accuracy was high, the study suffered from serious verification bias, as only approximately 137 of 6120 children (2.2%), those suspected of having been maltreated, received the reference standard. For the children who did not receive the reference standard, there is no way to ascertain the number of children who were potentially maltreated but unidentified (false negatives). Furthermore, as inclusion in this study involved a convenience sample of children/families who a) gave consent for participation and b) cooperated in filling out the questionnaire, we do not know if the children in this study were representative of their study population. In addition, unlike screening tools for intimate partner violence [66,67], none of the screening tools for possible maltreatment have been evaluated through randomized controlled trials; as such, we have no evidence about the effectiveness of such tools in reducing recurrence of maltreatment or improving child well-being.
This review identified one study which evaluated a screening tool that did not suffer from serious verification bias or incorporation bias. Sittig et al. [27] evaluated the ability of the SPUTOVAMO five-question checklist to identify potential physical abuse or neglect in children under the age of 8 years who presented to an emergency department with any physical injury. While no children exposed to potential physical abuse were missed by this tool, at a population level a large number of children were falsely identified as potentially physically abused (over 13,000); furthermore, at a population level, many children potentially exposed to neglect were missed by this tool (334 per 100,000). Qualitative research suggests that physicians report having an easier time detecting maltreatment based on physical indicators, such as bruises and broken bones, but more difficulty identifying less overt forms of maltreatment, such as 'mild' physical abuse, emotional abuse, and children's exposure to intimate partner violence [68]. The authors of this study suggest that the SPUTOVAMO "checklist is not sufficiently accurate and should not replace skilled assessment by a clinician" [27].
The poor performance of screening tests for identifying children potentially exposed to maltreatment that we found in this review leads to a similar conclusion to that reached in the World Health Organization's Mental Health Gap Action Programme (mhGAP) update, which states that "there is no evidence to support universal screening or routine inquiry" [69]. Based on the evidence, the mhGAP update recommends that, instead of screening, health care providers use a case-finding approach to identify children exposed to maltreatment by being "alert to the clinical features associated with child maltreatment and associated risk factors and assess for child maltreatment, without putting the child at increased risk" [69]. As outlined in the National Institute for Health and Clinical Excellence (NICE) guidance for identifying child maltreatment, indicators of possible child maltreatment include signs and symptoms; behavioural and emotional indicators or cues from the child or caregiver; and evidence-based risk factors that prompt a provider to consider, suspect or exclude child maltreatment as a possible explanation for the child's presentation [70]. The NICE guidance includes a full set of maltreatment indicators that have been determined based on the results of their systematic reviews [70]. This guidance also discusses how providers can move from "considering" maltreatment as one possible explanation for the indicator to "suspecting" maltreatment, which in many jurisdictions invokes a clinician's mandatory reporting duty. In addition, there are a number of safety concerns that clinicians must consider before inquiring about maltreatment, such as ensuring that, for children who are of an age and developmental stage where asking about exposure to maltreatment is feasible, such inquiry occurs separately from their caregivers, and that systems for referrals are in place [71].
The findings of this review have important policy and practice implications, especially since, as noted in the introduction, there is an increasing push to use adverse childhood experiences screening tools in practice [15,16]. We are not aware of any diagnostic accuracy studies evaluating adverse childhood experiences screening tools, and it is unclear how these tools are being used in practice, or how they will be used in the future [72]. For example, does a provider who learns a child has experienced maltreatment via an adverse childhood experiences screener then inform CPS authorities? What services is the child entitled to based on the findings of an adverse childhood experiences screener, if the child indicates they have experienced child maltreatment along with other adverse experiences? The findings of the present review suggest that additional research is needed on various child maltreatment identification tools (further accuracy studies, along with studies that assess acceptability, cost effectiveness, and feasibility) before they are implemented in practice. The findings also suggest the need for more high-quality research about child maltreatment identification strategies, including well-conducted cohort studies that follow a sample of children identified as not maltreated (to reduce verification bias) and randomized controlled trials that assess important outcomes (e.g., recurrence and child well-being outcomes) in screened versus non-screened groups. The results of randomized controlled trials that have evaluated screening in adults experiencing intimate partner violence underscore the need to examine the impacts of screening [66,67]. Similar trials in a child population could help clarify the risks and benefits of screening for maltreatment. Future systematic reviews that assess the accuracy of tools that attempt to identify children exposed to maltreatment by evaluating parental risk factors (e.g., parental substance use) would also complement the findings of this review.
---
Strengths and limitations
The strengths of this review include the use of a systematic search to capture identification tools, the use of an established study appraisal checklist, calculations of false positives and negatives per 100 where prevalence estimates were available (which may be more useful for making clinical decisions than sensitivity and specificity rates), and the use of GRADE to evaluate the certainty of the overall evidence base. A limitation is that we included English-language studies only. There are limitations to the evidence base, as studies were rated as unclear or high risk of bias and the overall certainty of the evidence was low. Additional limitations include our reliance on the 2 and 10% prevalence rates commonly seen in emergency departments [20] and our use of American estimates to model potential service outcomes following a positive screen (e.g., the number of children post-investigation who receive services). These prevalence rates likely do not apply across different countries where prevalence rates are unknown. For example, one study evaluated the Escape instrument in an Iranian emergency department. While the authors cite the 2 to 10% prevalence rate in their discussion [25], we are unaware of any studies estimating the prevalence of child maltreatment in Iranian emergency departments. When known, practitioners are encouraged to use the formulas in the methods section (or to use GRADEpro) to estimate false positives and negatives based on the prevalence rates of their setting, as well as known estimates for service responses in their country, in order to make informed decisions about the use of various identification strategies. Furthermore, our modelling of service outcomes assumes that 1) all positive screens will be reported and 2) that reports are necessarily stressful/negative. While many of the included studies that used CPS as a reference standard reported all positive screens, it is unclear if this would be common practice outside of a study setting (i.e., does a positive screen trigger one's reporting obligation?). Further research is needed to determine likely outcomes of positive screens. It is also important to recognize that while reviews of qualitative research do identify that caregivers and mandated reporters have negative experiences and perceptions of mandatory reporting (and associated outcomes), there are some instances where reports are viewed positively by both groups [68,73]. Finally, because our review followed the inclusion/exclusion criteria of Bailhache et al. [13] and excluded studies that did not explicitly set out to evaluate sensitivity, specificity, positive predictive values or negative predictive values, it is possible that there are additional studies where such information could be calculated.
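To make the estimation encouraged above concrete, the following is a minimal sketch in Python (not code from this review; the function name is ours) of how a reader might translate a tool's reported sensitivity and specificity, together with an assumed local prevalence, into expected true and false positives and negatives per 100,000 children screened. The worked numbers, the Escape instrument's reported 100% sensitivity and 98% specificity under the 2% and 10% prevalence assumptions used in this review, are illustrative only; the calculations reported in Additional file 5 remain the authoritative figures.

# Minimal illustrative sketch: consequences of screening per 100,000 children,
# given a tool's sensitivity, specificity, and an assumed maltreatment prevalence.

def screening_consequences(sensitivity, specificity, prevalence, population=100_000):
    """Return counts of true/false positives and negatives per `population` screened.
    Sensitivity, specificity, and prevalence are proportions between 0 and 1."""
    maltreated = population * prevalence
    not_maltreated = population - maltreated
    true_positives = sensitivity * maltreated
    false_negatives = maltreated - true_positives      # maltreated children missed by the tool
    true_negatives = specificity * not_maltreated
    false_positives = not_maltreated - true_negatives  # children falsely suspected
    return {
        "true positives": round(true_positives),
        "false negatives": round(false_negatives),
        "true negatives": round(true_negatives),
        "false positives": round(false_positives),
    }

# Illustration: reported Escape-instrument accuracy (sensitivity 1.00, specificity 0.98)
# under the 2% and 10% prevalence assumptions used in this review.
for prevalence in (0.02, 0.10):
    print(prevalence, screening_consequences(1.00, 0.98, prevalence))

Run as written, this sketch yields roughly 1960 falsely suspected children per 100,000 screened at 2% prevalence (1800 at 10%) and no missed children, illustrating how even a highly specific tool can produce large absolute numbers of false positives at a population level. Practitioners should substitute the prevalence and accuracy estimates of their own setting, or use GRADEpro, as suggested above.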
---
Conclusion
There is low to very low certainty evidence that the use of screening tools may result in high numbers of children being falsely suspected or missed. These harms may outweigh the potential benefits of using such tools in practice. In addition, before considering screening tools in clinical programs and settings, research is needed that identifies patient-important outcomes of screening strategies (e.g., reduction of recurrence).
---
Availability of data and materials
All data are available within this article, its supplemental material, or via the references.
---
Supplementary information
Supplementary information accompanies this paper at https://doi.org/10.1186/s12887-020-2015-4.
---
Additional file 1. PRISMA Checklist
---
Additional file 2. Example search strategy
---
Additional file 3. Study and participant characteristics of interest
---
Additional file 4. Critical appraisal rankings
Additional file 5. Consequences of screening per 100,000 children

---

Abbreviations

CPS (Child Protective Services): A short form for governmental agencies responsible for providing child protection, including responses to reports of maltreatment; GRADE (Grading of Recommendations, Assessment, Development and Evaluation): The GRADE process involves assessing the certainty of the best available evidence and is often used to support guideline development processes; mhGAP (Mental Health Gap Action Programme): A program launched by the World Health Organization to facilitate the scaling up of care for mental, neurological, and substance use disorders; the program is comprised of evidence-based guidelines and practical intervention guides used to assist in the implementation of guideline principles; NICE (National Institute for Health and Care Excellence): An executive non-departmental body operating in the United Kingdom that provides national guidance and advice to improve health and social care; QUADAS-2 (Quality Assessment of Studies of Diagnostic Accuracy-2): A tool for evaluating the quality of diagnostic accuracy studies

---

Authors' contributions

JRM conceptualized and designed the review, carried out the analysis, and drafted the initial manuscript. HLM assisted with conceptualizing the review. AG and JCDM checked all data extraction. NS was consulted regarding the GRADE analysis. JCDM and CM assisted with preparing an earlier draft of the review, including interpretation of data. All authors made substantial contributions to revising the manuscript and all authors approved of the manuscript as submitted.

---

Ethics approval and consent to participate

Not applicable.
---
Consent for publication
Not applicable.
---
Competing interests
The authors declare that they have no competing interests.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 50,344 | 2,013 |
67a43166c1635adeef7b22b26979657c304768ab | Habits and the socioeconomic patterning of health-related behaviour: a pragmatist perspective | 2,023 | [
"JournalArticle"
] | Unhealthy behaviours are more prevalent in lower than in higher socioeconomic groups. Sociological attempts to explain the socioeconomic patterning of health-related behaviour typically draw on practice theories, as well as on the concept of lifestyles. When accounting for "sticky" habits and social structures, studies often ignore individuals' capacity for reflection. The opposite is also true: research on individual-level factors has difficulty with the social determinants of behaviour. We argue that the pragmatist concept of habit is not only a precursor to practice theories but also offers a dynamic and action-oriented understanding of the mechanisms that "recruit" individuals to health-related practices. In pragmatism, habits are not merely repetitive behaviours, but creative solutions to problems confronted in everyday life and reflect individuals' relationships to the material and social world around them. Ideally, the pragmatist conception of habits lays the theoretical ground for efficient prevention of and effective support for behaviour change. | Introduction
Why do people stick with their unhealthy habits despite adverse consequences? This is a pressing question for both public health research and policy-makers. For example, the prevalence of overweight and obesity has been steadily growing in all Western societies (Ng et al. 2014). Smoking continues to be a major public health problem even though its health risks are widely recognised (Reitsma et al. 2017), and many behaviours that are acknowledged as being essential for healthy lifestyles have not been universally adopted, such as getting enough exercise or eating sufficient amounts of vegetables (Spring et al. 2012). Since risky behaviours are more prevalent in lower socioeconomic groups, understanding why unhealthy behaviours are so resistant to change is vital to tackling inequalities in health. In this article, we argue that there is a theoretical tradition which has been unexplored in this context even though it is well suited for examining the core questions of health behaviour research. This tradition is pragmatism and its conception of habits, which offers a dynamic and action-oriented understanding of the mechanisms that "recruit" individuals to risky health-related behaviours.
Health-related behaviour is often understood as an issue having to do with the individual, guided by motivations, intentions, self-efficacy and expectations, as is the case with the influential and widely used theory of planned behaviour (Ajzen 1985) and the health belief model (Strecher and Rosenstock 1997). In this line of thinking, individuals behave the way they do because their intentions, knowledge, beliefs or motives lead them to do so (Cohn 2014;Nutbeam and Harris 2004). The individualised approach is especially visible in many psychological theories of behaviour change and in interventions and programmes designed on the basis of these theories (Baum 2008;Blue et al. 2016). Behavioural interventions have become increasingly important in public health promotion despite weak evidence for their overall effectiveness in generating long-lasting changes in behaviour and their potential to reduce inequalities in health (Baum and Fisher 2014;Jepson et al. 2010).
Research on social determinants of health often takes a critical stance towards psychological theories and recognises social structures as key contributors to health and health-related behaviours (Mackenbach 2012;Marmot and Wilkinson 2006). In health sociology, concepts and measures related to power, cultural norms, social circumstances, societal hierarchies, and material resources, for instance, are used to refer to structural constraints and modifiers of individual action and related outcomes. A large body of research has shown that education, occupational status, financial resources, living area, gender and ethnicity all affect ill health and life expectancy and the ways in which individuals act upon their health. The better off people are, the more likely they are to lead healthy lives and adopt healthy lifestyles (Marmot et al. 2010;Pampel et al. 2010).
While social structure undoubtedly constrains people's behaviour, people can also exert agency, as they are able to consider different options and to act in discordance with their structural predispositions and social circumstances (Mollborn et al. 2021). The key question in sociological theory is, thus, how individual behaviour can be simultaneously understood as shaped by social structures and as governed by individual choices. It is not enough to state that both social structures and individual intentions are important in explaining behavioural outcomes. One also needs to understand how and why social structures enable or generate particular kinds of behaviour within the context of people's everyday lives. Sociological theorisation on health inevitably falls short if it fails to confront this issue, thus leading to an insufficient understanding of factors that shape health-related behaviours (Williams 2003).
In this article, we first take a look at sociological theories of health-related behaviour, to which the concepts of lifestyle and, more recently, social practices have been central. Then we move on to discuss the pragmatist concept of habit. The concept of habit has often been used in research on health-related behaviours and behavioural change, and it has proved to be useful in explaining continuities in behaviour (Gardner 2015;Lindbladh and Lyttkens 2002). We argue that previous research has not taken into account the pragmatist understanding of the concept as an important contribution to theorisations of health lifestyles and practices. Pragmatism's dynamic and action-oriented understanding of habits helps in conceptualizing how practices are formed in interaction with material and social conditions and what the mechanisms are by which practices recruit individuals.
In pragmatism, habits are understood in terms of problem-solving; they are active and creative solutions to practicalities of everyday life and responsive to change, not mere blind routines. We, therefore, focus on the creative and active nature of habit formation, which can be understood as mechanisms by which behavioural patterns emerge. The pragmatist approach not only opens new perspectives in health research but can also give new tools for preventing non-communicable diseases and reducing inequalities in health. Next, we discuss theories of lifestyles and social practices and go on to show how the pragmatist theory of habits anticipated many of these insights (historically speaking) but also developed its own framework for analysing the inherent habituality of action.
---
The interplay between structure and agency: lifestyles and social practices
Attempts to bridge the gap between social structures and individual action in health sociology often draw from a loose tradition of practice theories. They are all based on the attempt to overcome methodological individualism without leaning too much towards methodological holism (Maller 2015). This means that practice theories try to take into account both individual action (methodological individualism) and the role social structures play in explaining action (methodological holism). From the perspective of health sociology, the fundamental question is how to understand the interplay between individual agency and structural factors in health-related matters, such as smoking, drinking or food consumption. In this respect, two concepts have been central: lifestyles and social practices.
Biomedical or social epidemiological approaches, which dominate health inequality research, typically frame lifestyle as a set of individual, volitional behaviours (Korp 2008). Lifestyle is thus a sum of individual health-related behaviours, such as ways of consuming alcohol or dietary habits. In sociological literature, lifestyle is seen as a collective attribute: lifestyles are shared understandings and ways of operating in the world that have been generated in similar social circumstances (Frohlich et al. 2001). They develop over the life-course (Lawrence et al. 2017;Banwell et al. 2010) and are shaped by social and material conditions (Cockerham 2005). As such, lifestyles are not merely outcomes of choices or personal motives and preferences, but they reflect an individual's position in a wider social structure and are fundamentally shaped by those structures. Cockerham (2009, p. 159) defines health lifestyles as "collective patterns of health-related behaviour based on choices from options available to people according to their life chances". In his Health Lifestyle Theory, Cockerham draws from Max Weber's concept of lifestyles, in which lifestyle-related choices are seen as voluntary but constrained and enabled by life chances that are essentially structural: similar life chances tend to generate similar patterns of voluntary action, thus generating patterns of behaviour (Cockerham 2009). Cockerham (2013) considers life chances as consisting of a variety of structural determinants, such as class circumstances, age and gender, which collectively influence agency and choices. The interaction between choices and chances constitutes dispositions to act, and the resulting lifestyles may have varying effects on health. Health-related behaviour is shown to be clustered within individuals and by socioeconomic status (De Vries et al. 2008;Portinga 2007), yet health lifestyles are rarely uniformly health-promoting or health-compromising, and there is a considerable amount of variation in health behaviour between individuals with similar socioeconomic characteristics (Mollborn and Lawrence 2018;Pronk et al. 2004).
Cockerham's approach, like many other approaches to health-related behaviours (Williams 1995;Frohlich et al. 2001;Carpiano 2006;Gatrell et al. 2004;Korp 2008), draws on Pierre Bourdieu's concept of habitus. Habitus is a set of dispositions that generate class-specific ways of operating in the world (Bourdieu 1984, pp. 101-102). Habitus develops during the socialisation process in interaction with social circumstances and social relations, and it generates tastes, choices and practices that are subjectively meaningful in given contexts. Accordingly, people accommodate their desired way of life in accordance with their assessment of their circumstances and available resources (Cockerham 2005).
From a Bourdieusian perspective, health lifestyles are a product of life conditions and available resources, as well as people's preferences and tastes, which are formed in class-specific circumstances. People's dietary patterns, leisure activities and ways of consuming alcohol therefore reflect class relations and distinctions. Bourdieu's ideas on habitus and practices highlight how people's day-to-day activities tend to be, to a great extent, routine-like and taken for granted: once established, a habitus governs behaviour, enabling everyday practices to be acted out without conscious deliberation. Thus, Bourdieu's approach explains why lifestyles are not random by underlining the importance of class-specific social conditions internalised in the habitus.
Bourdieu's approach has been repeatedly criticised for exaggerating objective social structures at the expense of agency and reflexivity (e.g., Adams 2006;Frohlich et al. 2001;Archer 2005). Critics have claimed that Bourdieu's concept of habitus does not allow for voluntary action and thus assumes that existing social structures are reproduced almost automatically. While Bourdieu acknowledges the importance of agency, he still prioritises structural determinants of action at the expense of individual choices, preferences and subjective understandings (Jenkins 1992). In more recent discussions, however, the notions of reflexivity and flexibility of habitus have been more central and the idea of an over-controlling habitus has been rejected (Cockerham 2018). Silva (2016) has noted that Bourdieu's conception of habitus changed over time so that in his later work habitus is more 'elastic' compared to his earlier work. In fact, Bourdieu's later ideas of the role of reflexivity in situations when habitus and field collide are very close to pragmatism (Bourdieu 1990;Bourdieu and Wacquant 1992;Crossley 2001). Yet, Bourdieu gives priority to social class in the process of lifestyle formation. This means that socioeconomic status determines to a great extent what people do (Gronow 2011). The impression that structures determine action can be seen as a result of Bourdieu's emphasis on class-related determinants of action. Regarding the possibility of modifying health-related habits, Crammond and Carey (2017) have emphasized that Bourdieu's notion of habitus does not credit public health initiatives or changing conditions with the capacity to influence habitus and behaviour.
More recently, the concept of social practices has been suggested as a general conceptual framework for analysing and understanding health-related behaviour. While there is a variety of so-called practice theories and no integrated theory of practice exists, we concentrate on practice theoretical approaches and applications that have been central to the fields of consumption (e.g. Warde 2005; Shove 2012) and health sociology (e.g., Blue et al. 2016;Maller 2015;Meier et al. 2018;Delormier et al. 2009). In these fields, Reckwitz's (2002) influential article is commonly cited as the source for defining social practices as routine-like behaviour which consists of several interrelated elements, such as bodily and mental functions, objects and their use, knowledge, understanding and motivation (ibid., p. 249). According to Shove et al. (2012), practices integrate three elements: materials (objects, goods and infrastructures), competences (understandings, know-how) and meanings (social significance, experiences). Practices can refer to any form of coordinated enactment: preparing breakfast, having a break at work or having after-work drinks. Similar to lifestyles, social practices turn attention away from the individual and their intentions and motives towards the routinised ways people carry out their daily lives (Warde 2005). The idea is to look at people as carriers of practices because practices guide human action according to their own intrinsic logic (Reckwitz 2002). In other words, practices are relatively stable ways of carrying out a set of elements in an integrated manner. It follows, therefore, that they are both performances enacted more or less consistently in daily life, as well as entities that shape the lives of their carriers (Shove et al. 2012).
The social practices approach points out how smoking, drinking and eating should not be seen merely as single behaviours, but rather as parts of collectively shared practices, which intersect with other everyday routines (Mollborn et al. 2021). For example, in understanding drinking behaviour, one cannot separate the act of drinking from other aspects of the drinking situation, such as the kind of alcohol being consumed, how, where and with whom it is done, and for what purposes (Meier et al. 2018;Maller 2015). Drinking, smoking and eating, accordingly, are not single entities but parts of different kinds of practices, performed and coordinated with other activities of daily life (Blue et al. 2016).
As the main aim of practice theoretical approaches is to explain the stability and continuities of behaviour, the approach has difficulties in grasping the role of individual agency in the enactment of practices. According to critics, in some versions of practice theory, the role of individual carriers and the ways in which they make sense of and experience practices seem to be more or less neglected (Spaargaren et al. 2016;Miettinen et al. 2012). Consideration of individuals' sense of doing things is particularly important when studying aspects of human behaviour that can have adverse consequences and are unequally distributed within society. Therefore, we argue that the practice theoretical approach would benefit from more theorization on individual agency and the mechanisms by which individuals adopt and become carriers of practices. For health sociology, the question of how practices change and how people are recruited as carriers of practices is particularly relevant: how can healthy practices be adopted or how can practices be modified to become healthier? We argue that these issues were fruitfully conceptualized by the philosophical tradition of pragmatism with its concept of habits, which takes the individual actor as a premise without losing sight of the force of everyday routines.
---
Habits as dispositions
In recent decades, pragmatism has become an important source of inspiration for many social theorists (e.g., Joas 1996;Baert 2005;Shilling 2008). For example, Joas (1996) has argued that pragmatists had a unique viewpoint on the creativity of action, whereas for Gross (2009) pragmatism is a key point of departure when discussing social mechanisms. Pragmatism has been previously introduced to health research, for example, in relation to the epistemological problems of different kinds of health knowledge (Cornish and Gillespie 2009) and health services research (Long et al. 2018). Here, we focus on the aspect of pragmatist thought we find most relevant for health sociology, namely, its concept of habits.
Classical pragmatist philosophers were active at the end of the nineteenth and the beginning of the twentieth century. They included the likes of George Herbert Mead, William James, Charles S. Peirce and John Dewey. We mainly draw inspiration from John Dewey for his insights into the notion of habit. However, all classical pragmatists shared a similar understanding of the essential role habits play in explaining action (Kilpinen 2009). Thus, even though classical pragmatists may have differed in their point of emphasis, Dewey's notion of habits is in many ways representative of the classical pragmatist understanding of habits. In this conceptualisation, habits are acquired dispositions to act in a certain manner, but they do not preclude conscious reflection.
Pragmatism, like the social practices approach, puts emphasis on contextual factors and the environments of action in understanding how habits are formed and maintained. Thus, one can argue that pragmatists were precursors to practice theorists. First and foremost, pragmatists highlighted the interaction between environments, habits and actors, by pointing out that people are constantly in the midst of ongoing action. Pragmatism also has an affinity with behaviourist psychology, which emphasises the role of environmental cues in triggering action. Behaviourists maintain that once an actor is conditioned to a reaction in the presence of a particular stimulus, the reaction automatically manifests itself when the stimulus is repeated. Say, a smoker might decide to give up smoking but the presence of familiar cues (e.g. cigarettes sold at the local grocery store, workmates who smoke) automatically triggers a response that results in a relapse. Classical pragmatists also thought that everything we do is in relation to certain environmental stimuli, but they did not think of the relationship in such mechanical, automatic terms (Mead 1934).
What acts as a stimulus depends on the part the stimulus plays in one's habits rather than on simple conditioning (Dewey 1896). Thus, people are not simple automata that react to individual stimuli in a piecemeal fashion but rather creatures of habit. This means that individual actions get their meaning by being a part of habits (Kilpinen 2009). What may trigger the smoker's relapse is not the presence of isolated cues but the habits that they are a part of; having a morning coffee, passing by or going to the local bars and grocery stores, and taking a break at work. Habits make the associated cues familiar and give them meaning.
The term habit, both in sociological literature and in common usage, typically refers to an action that has become routine due to repeated exposure to similar environmental stimuli. In this conception, the behaviour in question may originally have been explicitly goal-directed, but by becoming habituated, it becomes an unconscious, non-reflexive routine. As such, habits interfere with individuals' ability to act consciously. In practice theoretical approaches habit is similarly paralleled with routine-like ways of doing things. According to Southerton (2013), habits can be viewed as "observable performances of stable practices" (Southerton 2013, p. 337), which are essential for practices to remain stable (Maller 2015). In addition, habits are often understood as routines in popular science. According to Duhigg (2012), the habit "loop" consists of the association between routines and positive rewards.
Pragmatists tend to see habits somewhat differently: as inner dispositions. This conceptual move means that habits have a "mental" component and habits can exist as tendencies even when not overtly expressed. Habits are thus action dispositions rather than the observable behaviour to which they may give rise (Cohen 2007). As tendencies, habits include goals of action and not mere overt expressions of action; in other words, they are projective, dynamic and operative as dispositions even when they are not dominating current activities (Dewey 1922, p. 41). Habits make one ready to act in a certain way, but this does not mean that one would always act accordingly (Nelsen 2015). To paraphrase Kilpinen (2009, p. 110), habits enter ongoing action processes in a putative form and we critically review them by means of self-control. In this way, habits are means of action: habits "project themselves" into action (Dewey 1922, p. 25) and do not wait for our conscious call to act, but neither are they beyond conscious reflection. According to classical pragmatists, habits thus do not dictate our behaviour. Rather, habits constitute the so-called selective environment of our action. They give rise to embodied responses in the environments in which they have developed but, as dispositions, habits are tendencies to act in a certain manner, not overt routines that would always manifest themselves in behaviour. What distinguishes habits from inborn instincts is their nature as acquired dispositions.
Moreover, habits guide action and make different lines of conduct possible. This is easy to see in the case of skills that require practice; for example, being skilful in the sense that one habitually knows the basic manoeuvres, say, in tennis, does not restrict action but rather makes continuous improvement of the skill in question possible. Simply reading books on tennis does not make anyone a good player of tennis and therefore actual playing is required for habit formation. Furthermore, once habits are acquired as dispositions, not playing tennis for a while does not mean that the habits and related dispositions would immediately disappear.
In the pragmatist understanding, habits are not the opposite of agency but rather the foundation upon which agency and reflexive control of action are built. Purely routine habits do, of course, also exist but they tend to be "unintelligent" in Dewey's conceptualisation because they lack the guidance of reflective thought. Furthermore, Dewey (1922, p. 17) argued that conduct is always more or less shared and thus social. This also goes for habits, since they incorporate the objective conditions in which they are born. Action is thus already "grouped" in the sense that action takes place in settled systems of interaction (ibid., p. 61). This is where Dewey's ideas resemble practice theory most because the grouping of action into settled systems of interaction can be interpreted to indicate the kinds of enactments that practice theory is interested in. While repeated action falls within the purview of habits, Dewey (1922) was adamant that habits are dispositions rather than particular actions; the essence of habit is thus an acquired predisposition to particular ways or modes of responding in a given environment. Compared with practice theories, this notion of habits underscores competences (understandings, know-how) and meanings (social significance, experiences).
Because habits are dispositions, they are the basis on which more complicated clusters of habits and, thus, practices, can be built. This means that practices can recruit only those who have the habits that predispose them to the enactments related to a practice.
---
Habits as practical solutions
In the previous section, we explained that pragmatists did not think of habits as mere routines. To be more precise, Dewey distinguished between different kinds of habits on the grounds of the extent of their reflexivity. Dewey labelled those habits that exhibit reflexivity as intelligent habits. Smoking is an example of what Dewey called "bad habits": they feel like they have a hold on us and sometimes make us do things against our conscious decisions. Bad habits are conservative repetitions of past actions, and this can lead to an enslavement to old "ruts" (Dewey 1922, p. 55). Habits hold an intimate power over us because habits make our selfhood: "we are the habit", in Dewey's (1922, p. 24) words. However, habits need not be deprived of thought and reasonableness. So-called intelligent habits, in which conscious reflection and guidance play a part, were Dewey's ideal state of affairs. Dewey (ibid., p. 67) thought that what makes habits reasonable is mastering the current conditions of action and not letting old habits blindly dominate. There is thus no inherent opposition between reason and habits per se but between routine-like, unintelligent habits and intelligent habits, which are open to criticism and inquiry (ibid., p. 77).
Many forms of health-related behaviour can be characterised in Dewey's terms as unintelligent habits. We stick to many habits and rarely reflect on them in our daily lives. However, that there are intelligent and unintelligent habits does not necessarily imply that all healthy habits would be intelligent in the sense of being open to reflection. Further, the unhealthiness of a habit does not in itself make a habit unintelligent in the sense of being an unconscious routine. Rather, all habits are intelligent in that they have an intrinsic relationship with the action environment. They help the actor to operate in a given environment in a functional and meaningful way. For example, smoking can be seen as meaningful in many hierarchical blue-collar work environments, where the way in which work is organised determines, to a great extent, workers' ability to have control over their working conditions. Smoking can be used as a means to widen the scope of personal autonomy because in many workplaces a cigarette break is considered a legitimate time-out from work (Katainen 2012). Smoking can thus be seen as a solution to a "problem" emerging in a particular environment of action, the lack of personal autonomy. In this sense, it is an intelligent habit that enables workers to negotiate the extent of autonomy they have and to modify their working conditions (ibid.). As shared practices, cigarette breaks motivate workers to continue smoking and recruit new smokers, but when smoking becomes a routine, reinforced by nicotine addiction, it does not need to be consciously motivated (see also Sulkunen 2015). In the context of highly routinized moments of daily smoking, reflection on the habit and its adverse consequences to health is often lacking (Katainen 2012). This means that the habit in question is not fully intelligent in Dewey's terms.
The mechanisms of adopting so-called bad habits can be very similar to adopting any kind of habit if we understand habits as enabling a meaningful relationship with the environments and conditions in which they were formed. This idea also helps us rethink the socioeconomic patterning of health-related lifestyles. We do not have to assume that people in lower socioeconomic positions always passively become vehicles of bad habits due to limited life chances. The pragmatist view on habit presupposes an actor who has an active, meaningful relationship with the environment, that is, an actor with a capacity for agency, as our illustration of habits as a way to increase worker autonomy shows. Unlike practice theory or Bourdieu's concept of habitus, the pragmatist concept of habit explains habitual action as a solution to practical problems in daily life. For pragmatists, action is always ongoing, and those activities that work and yield positive results in a given context have the potential to become habitual. We thus use habits to actively solve problems in our living environments, adapt to the fluctuating conditions we live in, and also modify these conditions with our habits.
---
Habits, doubt and change
So far, we have discussed habits as a relationship between the actor and the environment of action. We already hinted at the pragmatist idea that habits can be reflexive, and we now move on to discuss in more detail how and why habits change. According to Shove et al. (2012), practices are formed and cease to exist when links between materials, competences and meanings are established and dissolved. Additionally, practice theorists have suggested that practices may change when they are moved to a different environment or when new technologies and tools are introduced (Warde 2005). Actors may learn new things and perform practices in varying ways as performances are rarely identical (Shove 2012). However, it is insufficient to assert that practice theory assumes an active agent with transformative capacity if the underlying view of agency is passive and practices are the ones with agency to recruit actors. Furthermore, the question remains as to when actors are capable of being transformative and when they are confined to the repetition of practical performances.
The pragmatist understanding of how habits change, and when and how actors exercise their agency, originates in Charles Peirce's thought. Peirce (1877) argued that we strive to build habits of action and often actively avoid situations that place our habits in doubt because doubt is an uncomfortable feeling. However, habits are nevertheless subject to contingencies and unforeseeable circumstances. Doubt cannot thus be avoided and it manifests itself in the crises of our habits that take place in concrete action situations and processes.
How should one then go about changing habits? This is a central question in all health sociological theory and has significant practical implications. Dewey (1922, p. 20) was a forerunner of many modern views in that he saw that habits rarely change directly by, for example, simply telling people what they should do. This presupposition is well acknowledged in critical health research, which has repeatedly pointed out that there is a gap between guidelines of healthy living and people's life worlds (e.g., Lindsay 2010). It is usually a better idea to approach habit change indirectly by modifying the conditions in which habits occur. In the case of unwanted habits, conditions "have been formed for producing a bad result, and the bad result will occur as long as those conditions exist" (Dewey 1922, p. 29). Dewey's emphasis on the role of conditions is well reflected in modern public health promotion, which relies on population-level measures and interventions. Yet, Dewey's notion of the conditions of habits goes beyond macro-level measures, such as taxation, restrictions and creating health-promoting living environments, to cover more detailed aspects of our daily life. According to Dewey, changing the conditions can be done by focusing on "the objects which engage attention and which influence the fulfilment of desires" (ibid., p. 20). Assuming that simply telling someone what they should do will bring about a desired course of action amounts to a superstition because it bypasses the needed means of action, that is, habits (ibid., pp. 27-28).
Interestingly, Dewey's ideas of behaviour change have many similarities with the approach known as nudging, as both want to modify environmental cues to enable desired behavioural outcomes (Vlaev et al. 2016). According to both of these approaches, behavioural change is often best achieved by focusing on the preconscious level of habitual processes rather than appealing to the conscious mind by informing people of the potential risks associated with, for example, their dietary habits. Despite these similarities, the pragmatist view of habit change cannot be reduced to the idea of modifying people's "choice architectures". As Pedwell (2017) has pointed out, advocates of the nudging approach fail to sufficiently analyse how habits are formed in the first place and how they change once nudged. In the nudge theory, habits are analogous to non-reflexive routines, and the change in habitual behaviour occurs due to a change in the immediate environment of action. As a result, nudge advocates conceptualize the environment through a narrower lens than pragmatists and they are less concerned about how broader social, cultural, and political structures influence and shape everyday behaviour (ibid.).
According to pragmatists, changing habits is something that we do on a daily basis, at least to some extent. This does not mean that we would ever completely overhaul our habits. Dewey (1922, p. 38) thought that character consists of the interpenetration of habits, and therefore a continuous modification of habits by other habits is constantly taking place. In addition, habits incorporate some parts of the environments of action, but they can never incorporate all aspects of the contexts of action. What intelligence, or cognition in modern parlance, does in general is observe the consequences of action and adjust habits accordingly. Because habits never incorporate all aspects of the environment of action, there will always be unexpected potential for change when habits are exercised in a different environment (even if just slightly) than the one in which they were formed (ibid., p. 51).
Different or changed contexts of action imply the potential to block the overt manifestation of habits. For example, if workplace smoking policies are changed so that smokers are not allowed to smoke inside, the habit of smoking needs to be reflected upon and the practice of workplace smoking modified. If the employer simultaneously provides aid for quitting smoking, or even better, creates conditions for work which would support workers' experience of agency and autonomy, some may consider breaking the addiction, at least if colleagues are motivated to do the same thing. Such contextual changes lead to moments of doubt in habit manifestation and thus compel us to reflect on behaviour and, in some cases, to come up with seeds for new habits. The habit of smoking can be seen as a way of dealing with "moments of doubt". It is a solution to certain problems of action in a given environment, as in the previous example of workplace smoking and autonomy. If the original context for which the habit was a "solution" to changes, it becomes easier to change the habit as well.
Pragmatist thinking thus suggests that here lies one of the keys to reducing unhealthy behaviours. By modifying the environments of habits, it is possible to create moments of doubt that give ground to the formation of new habits. Contrary to nudge theorists, however, pragmatists are not only concerned with promoting change in individual behaviours and their immediate action environments but also in the sociocultural contexts of habit formation by enabling people to create new meaningful capacities and skills (Pedwell 2017). The pragmatists also considered the consequences of moments of doubt on habits. Dewey (ibid., p. 55) argued that habits do not cease to exist in moments of doubt but rather continue to operate as desireful thought. The problem with "bad habits" is that a desire to act in accordance with the habit may lead to solving situations of doubt by changing the environment so as to be able to fulfil the habit rather than changing the bad habit. For example, new smoking regulations intended to decrease smoking may not lead to an actual decrease but rather to a search for ways to circumvent the regulation by smokers.
A crisis of a particular habit thus need not always result in changes in behaviour, as the disposition does not change overnight and may lead to looking for ways to actively change the environment of action back to what it used to be. Furthermore, the crisis (i.e., situation of doubt) may simply be left unresolved. This is what often happens when people are exposed to knowledge of the adverse consequences of their behaviour. There might be a nagging sense that one really should not behave the way one does, but as long as the environmental cues are in place, the habit is not modified, especially if one's social surroundings reinforce the old habit (e.g., other people also continue smoking at the workplace). It can also happen that one makes minor changes in behaviour, for example, by cutting down instead of quitting smoking, which can in time lead to falling back on earlier smoking patterns. New workplace smoking policies, therefore, often mean that the practice of smoking is modified, and the smokers adopt new places and times for smoking. While old habits often die hard, discordances between habits and their environments can nevertheless trigger reflection and thus have a potential for change.
---
Discussion
We have argued that the pragmatist understanding of habits is an often-overlooked forerunner of many modern theories of health behaviour. While the health lifestyle theory helps to analyse the factors by which health lifestyles are patterned and points out that both contexts of action and individual choices are important in lifestyle formation, it is less helpful in empirical analyses on the mechanisms by which particular patterns of behaviour emerge in the interplay between choices and chances. The social practices approach further elaborates the relationship between choices and life chances by turning attention away from the structure-agency distinction towards enactments of everyday life and on how people go about their lives by carrying social practices. However, the social practices approach runs the risk that individual action becomes a mere enactment of practices. Thus, the practices are the true agents and people become mere carriers of practices. In this context, the pragmatist notion of habits can be useful in grounding practices within the clusters of habits that people have, thereby enabling them to be recruited by specific practices.
To conclude the paper, we want to stress some of the key pragmatist insights into the theorization of health lifestyles and practices. First, unlike practice theories, pragmatism takes individual actors and their capacity for meaning making and reflexivity as a premise for understanding how habits are formed and maintained. Thus, from the actor's point of view, habits, even "bad" habits, should be understood as functional and meaningful ways of operating in everyday circumstances. Habits are creative solutions to problems confronted in everyday life and reflect individuals' relationships to the material and social world around them. Action that proves useful and meaningful in a particular context is likely to become habitual. In the context of health inequalities, risky health-related habits can often be seen as a way to strive for agency in circumstances that provide little means for expressing personal autonomy. We suggest that this insight should be at the core of designing any public health or behavioural change interventions tackling health inequalities.
Second, pragmatism suggests that habits should be understood as dispositions; people are recruited by practices only when their dispositions enable this to happen. Often a great deal of habituation is required before the predispositions that make recruitment possible are in place. Third, pragmatism provides tools to analyse how moments of doubt enter habitual flows of action. Doubting habits is an inherent part of our action process, but habits are called into question especially by changes in the environments of action that make particular habits problematic. This, then, can lead to the development of new or modified habits as a response to the "crisis" of action.
If the social and material environment of action, to which the habit is a response, stays more or less the same, the habit will be difficult to change.
The pragmatist conception of habits, while emphasizing agency and reflexivity, does not ignore the significance of materiality and routines in daily conduct but is able to incorporate these elements of action in a way that benefits empirical analyses of everyday practices. Pragmatism thus suggests a variety of research settings to investigate the mechanisms by which health-related habits are formed. Here, we provide a few examples. On a macro level, it is important to observe how organisational, technological, or legislative changes are manifested in different contexts and how they modify and enable habitual action in different social groups and settings. Structural measures to promote public health are likely to have varying effects depending on the contexts of action of different population groups. Although the physical environment may be the same, the environment of action is not the same for everyone. In pragmatist terms, new policies can be understood as modifications of action environments, which potentially create moments of doubt in habitual action. For example, there is considerable evidence that smoke-free workplace policies reduce workers' smoking (Fichtenberg and Glantz 2002), but more research is needed to determine how different socioeconomic groups are affected by these policies. Macro-level policy changes create an excellent opportunity to study how policies give rise to new patterns of health-related behaviour, how policies are implemented in different contexts, and how reactions to policies and their effects vary depending on socioeconomic circumstances.
A micro-level analysis of health-related behaviour, on the other hand, could focus on the triggers of the immediate environment of action (material, social, or cognitive) to examine how habits are formed as practical and creative solutions to specific problems and what kinds of factors create situations of doubt and thus contain the potential for habit change. Research should analyse how moments of doubt regarding health-related habits emerge in differing socioeconomic contexts, as well as why unhealthy habits can and often do become deeply routinized and resistant to change. Furthermore, it is essential to identify the problems in relation to which particular habits of action have been formed. In both micro- and macro-level analytical approaches, people's reflexive capacity and the pursuit of a meaningful and functioning relationship with their environments should be at the core of analysis.
Methodologically, we suggest that the pragmatist approach to health behaviour research calls for methods that integrate the observation of action and people's accounts of and reasoning about their conduct. Ethnography is one research method suited to this task. With participant observation, it is possible to access lived experiences in the local settings through which larger policies affect health (Hansen et al. 2013; Lutz 2020) and to reach hard-to-reach population groups (Panter-Brick and Eggerman 2018). So far, ethnographic studies have been rare in health inequality research (e.g., Lutfey and Freese 2005). One way to proceed is provided by Tavory and Timmermans (2013), who have suggested pragmatism as a theoretical-methodological basis for constructing causal claims in ethnography. They propose that a useful starting point for observation could be the process of meaning making: how individuals creatively navigate their conduct when confronting moments of doubt and how they make sense of and respond to them in more or less habitual ways. However, surveys can also be used in creative ways to investigate people's habits, for example, using mobile apps that ask and/or track what people are doing. Other methods besides ethnography are thus needed to test the causal claims made by ethnographers.
Lastly, research is needed on how educational systems predispose people to develop reflective habits. One possible explanation for why knowledge about the adverse consequences of health-related behaviour is correlated with people's socioeconomic status, and especially their level of education, is that a higher level of education makes one more sensitive to knowledge-related cues for behaviour. This is because higher educational levels tend to bring about the habit of reflecting on the basis of new knowledge. Education is intimately related to a habit of thinking of things in more abstract terms: distancing oneself from the specifics of particular situations and moving towards more abstract thinking. A high level of education also means the absorption of new knowledge has become habitual. Unfortunately, there are no shortcuts to developing such capacity. This is one of the reasons why merely providing information on health-related issues affects different population groups differently.
---
Data availability
Not applicable as no data was used in the article.
---
Declarations
---
Conflict of Interest
The authors have no conflicts of interest to declare.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Anu Katainen is a Senior Lecturer in Sociology at the Faculty of Social Sciences, University of Helsinki, Finland. Her research comprises projects investigating alcohol policy and drinking cultures, as well as social and health inequalities, with a focus on comparative qualitative sociology.
Antti Gronow is a Senior Researcher at the Faculty of Social Sciences, University of Helsinki, Finland. His research interests include climate policy, advocacy coalitions, social network analysis, and pragmatist social theory. | 46,285 | 1,070 |
9d5a3b6ed5fd3840d6e576e98b1c55ea08a701cd | Health Risk Behaviour among Adolescents Living with HIV in Sub-Saharan Africa: A Systematic Review and Meta-Analysis | 2,018 | [
"Review",
"JournalArticle"
] | The burden of health risk behaviour (HRB) among adolescents living with HIV (ALWHIV) in sub-Saharan Africa (SSA) is currently unknown. A systematic search for publications on HRB among ALWHIV in SSA was conducted in PubMed, Embase, PsycINFO, and Applied Social Sciences Index and Abstracts databases. Results were summarized following PRISMA guidelines for systematic reviews and meta-analyses. Heterogeneity was assessed by the DerSimonian and Laird method and the pooled estimates were computed. Prevalence of current condom nonuse behaviour was at 59.8% (95% CI: 47.9-71.3%), risky sexual partnerships at 32.9% (95% CI: 15.4-53.2%), transactional sex at 20.1% (95% CI: 9.2-33.8%), and the experience of sexual violence at 21.4% (95% CI: 16.3-27.0%) among ALWHIV. From this meta-analysis, we did not find statistically significant differences in pooled estimates of HRB prevalence between ALWHIV and HIV uninfected adolescents. However, there was mixed evidence on the occurrence of alcohol and drug use behaviour. Overall, we found that research on HRB among ALWHIV tends to focus on behaviour specific to sexual risk. With such a high burden of HRB for the individuals as well as society, these findings highlight an unmet need for age-appropriate interventions to address the behavioural needs of these adolescents. | Introduction
Health risk behaviour (HRB) is a major concern in the prevention and management of HIV [1]. Such behaviour is often initiated or reinforced during adolescence [2]. The main forms of HRB include sexual behaviour contributing to unintended pregnancy and sexually transmitted diseases, alcohol, tobacco and drug use, unhealthy dietary habits, inadequate physical activity, and behaviour that contributes to unintentional injury or violence [3,4]. Increased propensity for risk taking is a common phenomenon during adolescence [5]; adolescents living with HIV are vulnerable [6,7]. They encounter various adverse impacts following their engagement in HRBs.
A number of studies conducted among sexually active adolescents living with HIV report that about half have early sexual debut and unprotected sexual intercourse [6, 8-10]. Other studies have reported that adolescents living with HIV (ALWHIV) engage in various HRBs such as transactional sex, that is, sexual intercourse in exchange for material benefit or status [11,12], alcohol abuse, and drug use [8, 13-15]. This is problematic for persons living with HIV, because such behaviour underlies suboptimal health outcomes such as poor adherence to antiretroviral treatment [16-18], HIV coinfection [19,20], injury, and mortality [21]. Furthermore, this behaviour adversely impacts the socioeconomic welfare of affected families [22].
The occurrence of HRB among ALWHIV is of major public health significance in sub-Saharan Africa (SSA), where there were an estimated 1.2 million ALWHIV aged 15-19 years and 3.2 million HIV infected children below 15 years in 2014 [2]. The vulnerability to HRB and its consequences among the ALWHIV in SSA is exacerbated by the social environmental factors surrounding the HIV epidemic in this region. Among such factors are household poverty, orphanhood, gender inequality, stigma, cultural practices, and poor accessibility to social or health services [23-27]. Besides these factors, growing evidence suggests that underlying physiological conditions such as HIV associated neurodevelopmental deficits [28], anxiety, and depression [8,10] increase susceptibility to risk taking among young people living with HIV.
In response to the enormous burden of HIV in SSA, some research and intervention programs have been conducted over the past few decades. Unfortunately, such efforts have not addressed the needs of adolescents [38], although Africa is home to 19% of the global youth population [39]. Key among the research gaps is the scarcity of literature on HRB among adolescents living with HIV in SSA. Specifically, there is a dearth of knowledge regarding which forms of HRB have so far been assessed, the characteristics of the ALWHIV (e.g., routes of HIV transmission), where such studies have been conducted in SSA, and the general burden of HRB among the ALWHIV. The lack of such research is further compounded by combining the adolescent age group with other age categories [40] and the assessment of HRBs in isolation [41]. Against this backdrop, this systematic review and meta-analysis aims to ascertain the amount of research on HRB and to document the general burden of HRB among adolescents living with HIV in SSA. The specific objectives are as follows:
(i) To identify and summarize characteristics of studies that quantify HRB among ALWHIV in SSA; (ii) To summarize the major forms of HRB assessed among ALWHIV in SSA; (iii) To compare the burden of HRB among ALWHIV and HIV uninfected adolescents among the eligible studies from SSA. We chose studies based on the PICOS approach (participants, intervention, comparison, outcome, and study design) [42]. Studies were eligible if they (i) were empirical studies published in a peer-reviewed journal and conducted within SSA; (ii) involved ALWHIV whose age range, mean, or median age fell within 10-19 years; and (iii) quantified any form of HRB among the ALWHIV. We excluded studies that (i) were published in languages other than English and (ii) did not aggregate HRB by HIV status of the participants.
---
Methods
Two authors (DS and PNM) independently screened the titles, abstracts, and full articles for eligibility and reached consensus.
---
Data Extraction
We used one data extraction sheet to extract general study characteristics of the eligible studies. These characteristics included (i) author and year of publication; (ii) country where the study was done; (iii) year the study was done; (iv) study design; (v) population description; (vi) number of ALWHIV and HIV uninfected adolescents; (vii) route of HIV transmission; and (viii) form of HRB quantified.
Then, using two separate data extraction forms, we extracted (i) the author and year of publication and (ii) data on each specific HRB. From each study, HRB data for ALWHIV were extracted; for HIV uninfected adolescents, these data were extracted only if the same HRB had also been assessed among the ALWHIV. One form was used to extract data used in the meta-analysis and the other for data that were to be narratively summarized. Data abstraction was conducted independently by two authors (DS and PNM), who then compared their results and reached consensus.
Our main outcome of interest for this systematic review and meta-analysis was the prevalence of specific HRBs among ALWHIV and HIV uninfected adolescents. For studies that were exclusively conducted among ALWHIV, we computed or extracted the reported percentages of those that engaged in a specific HRB. For those that mixed HIV infection groups and/or had additional age categories besides 10-19 years, we computed percentages of those that took part in a specified HRB for each HIV group within the 10-19 years age group. For those studies where it was impossible to compute these percentages, the occurrence of HRB was reported in its original effect measure, for example, odds ratio, median, or mean.
For each of the eligible studies, an assessment of the risk of bias across the studies was aided by the quality assessment tool for systematic reviews of observational studies (QATSO) [43]. The QATSO was designed for studies related to HIV prevalence or risky behaviour among men who have sex with men. It utilizes 5 parameters to obtain a total score that rates the overall quality of an observational study as either bad (0-33%), satisfactory (33-66%), or good (67-100%). These parameters include representativeness of sampling method used, objectivity of HIV measurement, report of participant response rate, control for confounding factors (in case of prediction or association studies), and privacy/sensitivity considerations. Each parameter is scored "1" if the condition was fulfilled and "0" if it was not.
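As an illustration of how a QATSO-style rating translates into the quality bands described above, here is a minimal sketch; the function and parameter names are hypothetical and not part of the instrument itself.

```python
# Hypothetical sketch of a QATSO-style quality rating (each parameter scored 0 or 1).
def qatso_rating(sampling_representative, objective_hiv_measure,
                 response_rate_reported, confounding_controlled,
                 privacy_considered):
    items = [sampling_representative, objective_hiv_measure,
             response_rate_reported, confounding_controlled,
             privacy_considered]
    score = 100 * sum(items) / len(items)   # percentage of fulfilled criteria
    if score < 33:
        band = "bad"
    elif score < 67:
        band = "satisfactory"
    else:
        band = "good"
    return score, band

# Example: a study fulfilling 4 of the 5 criteria scores 80% and rates as "good".
print(qatso_rating(True, True, True, False, True))
```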
---
Statistical Analysis.
Data were synthesized both quantitatively and narratively. We assessed the variation in effect size attributable to heterogeneity using the I² statistic of the DerSimonian and Laird method. Using a random-effects model, the pooled estimate was computed after Freeman-Tukey double arcsine transformation [44] (a worked sketch of this pooling procedure follows the list below). We compared the confidence intervals of the pooled estimates of the forms of HRB for the ALWHIV and HIV uninfected adolescents to determine if there were statistically significant differences. The statistical analyses were performed using STATA software (Stata Corporation, College Station, TX, 2005). We report the pooled estimates for four specific forms of HRB. These include the following: (i) Current condom nonuse behaviour (including any reported episode of sexual intercourse without a condom for any duration that includes the current period, e.g., the last 3 months or last 6 months) (ii) Risky sexual partnerships (including reports of having 2 or more sexual partners currently or in the past 12 months or any form of multiple sexual partnerships)
(iii) Sexual violence (including any reported episode (experienced or perpetrated) of forced sex, nonconsensual sex, or rape) (iv) Transactional sex (including any reported exchange of gifts or money for sex).
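To make the pooling step concrete, the sketch below (written in Python rather than the STATA package used in the study, with invented study counts purely for illustration) applies the Freeman-Tukey double arcsine transformation and DerSimonian-Laird random-effects weighting to study-level prevalences; the sin²(t/2) back-transformation used here is a common approximation of the exact inverse transform implemented in dedicated meta-analysis packages.

```python
import numpy as np

def freeman_tukey(events, n):
    """Freeman-Tukey double arcsine transform of a proportion and its approximate variance."""
    t = np.arcsin(np.sqrt(events / (n + 1))) + np.arcsin(np.sqrt((events + 1) / (n + 1)))
    var = 1.0 / (n + 0.5)
    return t, var

def dersimonian_laird_pool(events, n):
    """Random-effects pooled prevalence (DerSimonian-Laird) on FT-transformed proportions."""
    events, n = np.asarray(events, float), np.asarray(n, float)
    t, v = freeman_tukey(events, n)
    w = 1.0 / v                                    # fixed-effect (inverse-variance) weights
    t_fixed = np.sum(w * t) / np.sum(w)
    q = np.sum(w * (t - t_fixed) ** 2)             # Cochran's Q
    df = len(t) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    t_pooled = np.sum(w_star * t) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    back = lambda x: np.sin(np.clip(x, 0, np.pi) / 2) ** 2   # approximate inverse FT transform
    return {"prevalence": back(t_pooled),
            "ci": (back(t_pooled - 1.96 * se), back(t_pooled + 1.96 * se)),
            "I2": i2, "tau2": tau2}

# Hypothetical study counts (events = adolescents reporting the behaviour, n = sample size).
print(dersimonian_laird_pool(events=[30, 55, 12, 80], n=[60, 90, 40, 120]))
```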
We narratively summarized the results that could not be quantitatively pooled (e.g., poor hygiene behaviour and alcohol and drug use behaviour) by describing the effect estimates, such as percentages, odds ratios, means with their standard deviations, and medians with their interquartile ranges, whichever was reported in the study.
---
Results
We identified 1,691 published study citations from the 4 databases and an additional 2 articles [30,37] through snowballing. Of these, 220 were duplicates. We therefore screened 1,473 abstracts for initial eligibility, out of which 269 articles were identified. Full articles were obtained for these citations, of which 14 satisfied the eligibility criteria (Figure 1). The eligible studies were conducted between 1990 and 2012 among 6 sub-Saharan African countries of Nigeria, Rwanda, South Africa, Tanzania, Uganda, and Zimbabwe. The majority of the studies emanated from South Africa (𝑛 = 6) and Uganda (𝑛 = 4) of a total of 14. Most studies had a cross-sectional design, in addition two that utilized baseline data from a randomized control trial [13,33] and another that used baseline data from a cohort study [37]. Samples of the ALWHIV per study ranged from 26 to 3,992 while those for HIV uninfected adolescents were from 296 to 6,600. Four studies [6,14,31,35] had ALWHIV recruited from a clinical setting while the rest had their ALWHIV recruited from a general population setting through household surveys and community samples. Only three studies [6,14,31] described the route of HIV transmission among their participants. In these studies the majority (61-100%) had been perinatally infected (Table 1).
All the 14 eligible studies quantified sexual risk behaviour whereas alcohol use was quantified by 42.9%, sexual violence by 50.0%, and drug use by 21.4%. One study [30] assessed genital hygiene practices among male adolescents (Table 1). Among these 5 forms of HRB, sexual risky behaviour was the most variously assessed with specific examples like condom nonuse, transactional sex, sexual violence, dry sex practices (i.e., reducing vaginal lubrication to cause more friction during intercourse), early sexual debut, and multiple sexual partnerships. Details on specific HRB are summarized in Tables 2(a) and 2(b).
3.1. Sexual Risk Behaviour. Condom use behaviour was reported in 11 studies. We pooled results on current condom nonuse behaviour among ALWHIV from 9 studies and for HIV uninfected adolescents from 5 studies.
The pooled prevalence of condom nonuse behaviour among ALWHIV was estimated at 59.8% (95% CI: 47.9-71.3%) while among their HIV uninfected counterparts it was 70.3% (95% CI: 55.5-83.2%) (Figure 2). In contrast, findings from an additional study that was not part of the meta-analysis [30] reported a higher prevalence of condom nonuse at first sex among ALWHIV as compared to HIV uninfected adolescents (Table 2(b)).
Additionally, the pooled prevalence of engagement in any form of risky sexual partnerships among ALWHIV was 32.9% (95% CI: 15.4-53.2%) whereas among HIV uninfected adolescents it was 30.4% (95% CI: 8.4-58.8%) (Figure 3).
In addition, there were four more studies capturing risky sexual partnerships that were not synthesized in our meta-analysis [11,12,34,35] (Table 2(b)). One of them explored the association between HIV status and engagement in multiple sexual partnerships while comparing adolescents to young adults (aged 20-24 years) and found no statistically significant differences [35]. The second found no significant association between HIV status and having 6 or more sex partners in the past year among males who engaged in heterosexual anal sex [34]. The remaining 2 studies documented lifetime sexual partners among the adolescents, of which one found that 4.7% of the ALWHIV compared to 1.4% of the HIV uninfected had more than 3 lifetime sexual partners [11], and the other reported a mean of 1.8 lifetime sexual partners among the ALWHIV compared to 0.7 among their HIV uninfected counterparts [12].
Transactional sex was prevalent among 20.1% (95% CI: 9.2-33.8%) of the ALWHIV and 12.7% (95% CI: 4.2-24.7%) of the HIV uninfected ones (Figure 4).
Another study [34] not included in this pooled estimate found no significant association between HIV status and purchasing sex among adolescents that reported heterosexual anal intercourse.
Early sexual debut among the ALWHIV was reported in 5 studies (Table 2(b)). Two of these studies reported that 25.5% [30] and 42.1% [6] of the ALWHIV initiated their first sex at the age of 15 years or less. Furthermore, a study from South Africa [13] and another from Rwanda [14] reported the median age at first sexual encounter as 14.7 (IQR: 12.9-16.2) and 17 (IQR: 15-18) years, respectively. A study among female ALWHIV reported a mean age of 16.4 (S.D: 0.1) years among the ALWHIV and 16.2 (S.D: 0.1) years among HIV uninfected adolescents at first sexual intercourse [11].
Two studies reported a 6.2% prevalence of dry sex practices (i.e., reducing vaginal lubrication to cause more friction during intercourse) among female ALWHIV. In both studies, the prevalence of dry sex practices was lower among the HIV uninfected adolescents [30,33] (Table 2(b)). Another study [31] reported a high prevalence of non-use of contraception at either first sex (63%) or during current or previous relationships (48%) among ALWHIV (Table 2(b)).
---
Alcohol and Drug Use
Six studies quantified alcohol and drug use behaviour (Table 2(b)). All of the 6 studies reported alcohol drinking behaviour of which 3 compared ALWHIV and uninfected adolescents. Among the 3 studies with results for both HIV groups [11,13,33] the ALWHIV recorded higher occurrence of alcohol drinking behaviour (Table 2(b)). Another study reported that 61% of ALWHIV receiving medication from a clinic had drunk alcohol within 6 hours prior to having sex [14]. In another study among males who engaged in heterosexual anal sex, HIV status was not significantly associated with having anal sex under the influence of alcohol [34]. Drug use behaviour was reported by 3 studies. One reported its occurrence among 53.8% of the male ALWHIV compared to 38.1% of their HIV uninfected male counterparts [13]. The same authors in another study [33] reported the occurrence of drug use among 5.0% of the female ALWHIV compared to 6.3% of the HIV uninfected ones. The third study reported drug use among males who had heterosexual anal sex and showed that there was not a significant association between HIV status and heterosexual anal sex under the influence of drugs [34].
[Figure 2: Prevalence of condom nonuse behaviour among ALWHIV and HIV uninfected adolescents (forest plot); subgroup I² = 94.00% and 98.99%, overall I² = 97.89%; heterogeneity between groups: p = 0.463.]
---
Sexual Violence.
Seven studies captured reports of various forms of sexual violence such as forced sex, tricked sex, nonconsensual sex, and rape. Six of these studies [11,12,14,30,31,33] specifically reported victims' experience of sexual violence while only one study [13] conducted among rural South African males reported perpetrators' experience of sexual violence. The pooled prevalence of any form of sexual violence (i.e., either as a victim or as perpetrator) was 21.4% (95% CI: 16.3-27.0%) among ALWHIV, while that among HIV uninfected adolescents was 15.3% (95% CI: 8.7-23.3%) (Figure 5).
Poor hygiene behaviour was documented in one study from a small mining town in South Africa which reported that 22.5% of the male ALWHIV compared to 11.4% of HIV uninfected males did not wash their genitals at least once a day [30].
Overall, the studies were of high quality, with 10 of them rated as good and the remaining 4 [6,13,33,34] as satisfactory. Only 2 of the studies utilized nonprobability sampling [6,31], 6 did not report the participant response rate [6,12,13,33-35], and 3 did not mention how privacy or sensitivity of HIV was considered in the study [13,33,34].
---
Discussion
This review indicates that research on HRB among adolescents living with HIV in SSA is still scanty. Moreover, within SSA, this research emanates from a few countries in eastern and southern Africa. The within region variation possibly represents disparities in HIV burden such that most of this research has so far focused on parts of SSA with higher HIV prevalence, for example, southern Africa. However, since SSA globally accounts for the largest population of ALWHIV [2], there is an urgent need for more research on HRB of this population.
Furthermore, even among the few existing studies, important details such as the route of transmission and the adolescents' awareness of their HIV status are scanty and yet these are potential determinants of behavioural decision making [45]. The participants are also mainly drawn from the general population or clinical setting. However, it is likely that adolescents from certain settings, for example, dwellers of fishing communities and busy transport corridors, would report a disproportionately higher burden of HRB since such settings are associated with high HIV sociobehavioural risk [46,47].
[Figure 3: Prevalence of risky sexual partnerships among ALWHIV and HIV uninfected adolescents (forest plot); overall I² = 99.53%; heterogeneity between groups: p = 0.871.]
Owing to the overlapping confidence intervals of effect estimates, our findings indicate that there is no statistically significant difference in the prevalence of documented forms of HRB across the ALWHIV and HIV uninfected adolescent groups in SSA. That said, the prevalence of these HRBs is high in both groups, which underscores a major and so far unmet need for intervention among adolescents. The consequences of HRB in terms of psychosocial burden, injury, morbidity, and mortality are enormous [48]. Moreover, for ALWHIV, these may be exacerbated by their compromised health condition coupled with their increased need for optimizing care and treatment outcomes [17,19,20].
The high occurrence of unprotected sex at both current and first sexual intercourse among these adolescents is a serious concern. This is moreover compounded by concurrent sexual partnerships, transactional sex, and sexual violence in the form of nonconsensual sex, intimate partner violence, and rape, which are comparably high among both the ALWHIV and HIV uninfected adolescents. Similar to results from this review, some cross-sectional studies from the USA have documented a high prevalence of unprotected sex of 65% [10] and 62% [9] among adolescents living with HIV. Another systematic review of studies from SSA also indicates that transactional sex is a significant risk factor for HIV infection especially among young women [49]. Our findings on prevalence of sexual violence are within the ranges reported among adolescent girls from SSA [50]. This burden is similar for ALWHIV and their uninfected counterparts; most importantly, it is unacceptably high for both groups. We suggest that the high occurrence of risky sexual behaviour, sexual violence, and other forms of potentially high risk sexual practices such as transactional sex among ALWHIV may partly result from their vulnerable backgrounds, which are often characterized by stigma, psychological vulnerability, family stressors, poverty, and orphanhood [23,51]. Additionally, some underlying physiological pathways such as neurodevelopmental deficits, mental health, and HIV comorbidities possibly elucidate some behavioural trends.
Furthermore, our findings reveal that the use of alcohol and drugs is largely problematic, especially among male adolescents in SSA. Similar to our findings, a number of studies from other regions have reported a similar problem of alcohol and drug use, including among male adolescents living with HIV [10,52]. The use of alcohol and drugs among people living with HIV is linked to numerous problems like poor adherence outcomes [16], psychiatric comorbidity [53], and HIV infection [54]. Moreover, drug and alcohol use may form a niche for impulsivity and aggravated risk taking such as intimate partner violence, rape, and unprotected sex, among others [55,56].
[Figure 4: Prevalence of transactional sex among ALWHIV and HIV uninfected adolescents (forest plot); subtotal I² = 99.01%, overall I² = 97.97%; heterogeneity between groups: p = 0.329.]
Our results highlight the need to increase research on HRB among ALWHIV in SSA, to broaden the scope of HRBs currently being explored, and to include adolescents from the most at-risk settings in such studies. Additionally, it is necessary to target ALWHIV with pragmatic interventions that address their specific needs so as to prevent or reduce their engagement in HRBs. These interventions also need to foster safe and healthy environments in which adolescents do not fall victim to HRBs and forms of sexual injustice such as sexual violence and transactional sex.
One of the limitations of our review is that HRB was self-reported in all the eligible studies, and this may have involved some degree of social desirability bias. This form of bias generally arises when respondents answer questions in a way that favours their impression management [57]. However, assessment of HRB is predominantly conducted through self-reports. Additionally, our research focus was limited to studies conducted in SSA, and thus our results should be generalized to the entire African and other geographical contexts only with caution.
---
Conclusion
Research on HRB among adolescents living with HIV in SSA is still limited and currently focuses on a few forms of HRB, especially behaviour specific to sexual risk. Nonetheless, the existing research from this region reveals an appalling burden, especially of sexual violence (where in most cases the adolescents are victims), sexual risk behaviour, and substance or drug use. While HRB is noted to compromise health outcomes, the studies do not report a number of factors, such as route of HIV transmission and awareness of HIV status, which could enhance our understanding of the context of HRB in this patient group. Furthermore, the assessment of HRB is not uniform, pointing to the need for standardized assessment tools that would ensure better comparability of findings across studies. Nonetheless, the current review provides important insights into future research in the field of health risk behaviour and highlights the urgent need for age-appropriate interventions that will effectively address the behavioural and health needs of adolescents living with HIV in SSA. The ALWHIV themselves do not engage less in HRB than HIV uninfected adolescents. We suggest that further research is needed to explore in depth the forms of HRB and their predisposing and protective factors among ALWHIV and HIV uninfected adolescents within the SSA context. Such research may be crucial in guiding intervention planning for HRB and ensuring that the interventions are responsive to the special needs and challenges faced by specific adolescent groups like ALWHIV, for example, stigma, depression, and orphanhood [16,17,23,24].
[Figure 5: Prevalence of sexual violence behaviour among ALWHIV and HIV uninfected adolescents (forest plot).]
---
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
| 24,686 | 1,320 |
f8ca0d5117044725fcd4dda7aa8e720a7802d286 | Influence of culture on disease perception | 2,023 | [
"JournalArticle"
] | This scientific paper explores the complex relationship between culture, health, and disease, highlighting how cultural beliefs and practices shape perceptions of health and illness. Culture is described as a complex system of knowledge and customs transmitted from generation to generation, encompassing language, customs, and values. The paper emphasizes that concepts of health and disease can vary significantly across cultures. Different cultural backgrounds lead to diverse interpretations of what constitutes health or illness. Cultural beliefs influence how individuals perceive their health and respond to medical interventions. The text examines the example of Traditional Chinese Medicine (TCM), which differs from Western medicine by focusing on restoring balance and harmonizing energies within the body. The contrast between these two medical paradigms highlights the impact of culture on healthcare approaches. The paper also discusses the cultural acceptance of practices that may be harmful to health, such as incest in certain societies. These practices are considered sacred customs within those cultures, reflecting how cultural ideologies can shape disease risks. Furthermore, the paper explores how cultural factors interact with political and economic forces to create specific health risks and behaviors within societies. It emphasizes that culture plays a pivotal role in shaping human behavior and social acceptance. The paper concludes by emphasizing the enduring influence of culture on perceptions of health and disease throughout history, highlighting how cultural beliefs and practices continue to impact individuals' health experiences and outcomes. | INTRODUCTION
It seems appropriate to begin this paper with this question, as it explicitly cuts across the perceptions of different cultures worldwide. (1) The determinants of health are centered on lifestyle-based characteristics shaped by a wide range of social, economic and political forces that influence the quality of an individual's health. (2) Among these characteristics, at a distal level, are cultural determinants, which are essential for addressing and understanding the processes of health and disease in society. (3,4) Although there is no concrete definition of cultural determinants, it is advisable first to define the concept of culture when approaching its construction. (5) Culture is a complex system of knowledge and customs that characterizes a given population. (6) It is transmitted from generation to generation, and language, customs and values are part of it.
We also take the WHO definition of disease, which defines it as "Alteration or deviation of the physiological state in one or more parts of the body, due to generally known causes, manifested by symptoms and characteristic signs, and whose evolution is more or less foreseeable". (7) Now, let us think of an interrelation between culture and disease, which we will understand as the interpretation of health and disease and what it means to be healthy and sick. (8)
---
DEVELOPMENT
---
Concepts of health and disease may differ from one culture to another
There are thousands of cultures worldwide, each with its own social determinants and governed by laws, traditions and customs that give it its cultural character. Culture can thus be thought of as a body of traditional heritage through which universal matters are perceived, be it life, death, the past, the future, health or disease. The cultural approach to all these aspects is a form of doctrinal influence: "Culture is learned, shared and standardized", meaning that it can be learned and replicated individually within the cultural plane. (9) Focusing on the subject in question, disease, depending on the cultural frame it may be regarded as something positive or, in other cases, as something harmful. Let us analyze this:
We must think of illness as a cultural construct; the perception of illness then refers to the cognitive concepts that patients construct about their illness. Here, cultural beliefs play a fundamental role in the calmness felt after a negative medical test, in satisfaction after a medical consultation, and in how patients' perceptions of illness shape the future use of relevant services. In this line of thinking, illness perceptions influence how an individual copes with that situation (such as receiving treatment) and the emotional responses to illness. In many cultures, hospital treatment may be skipped altogether. (10) Traditional Chinese Medicine (TCM) is over 2,000 years old. It is based on Taoism and aims to restore the balance between the organism and the universe, known as yin and yang, promoting a holistic approach. It rests on the presence of Qi, and since everything is energy in different patterns of organization and condensation, humans have spiritual, emotional and physical aspects. While its treatments focus on harnessing and harmonizing imbalanced energies and on maintaining or restoring the individual's homeostatic processes to prevent disease outbreaks, the Western paradigm focuses primarily on treatment. For this reason, traditional Chinese medicine, which has proven to be safe, effective and with few side effects, is gaining increasing importance today. (11) Given the modern, global concept of prevention and how every healthcare system is designed, one might think that a greater focus on TCM in planning would contribute significantly to its impact on individuals, but this is not yet the case: it would require training and education of healthcare professionals in the basics of TCM, and a series of adjustments to the system whose development will take years. (12) Now, let us think about incest. As everyone knows, it is regarded as an act that brings health problems to future children, who can suffer from all kinds of diseases. However, there are countries where it is allowed, not because of their respective cultures but because different cultures living together have different perceptions of health and disease. For example, Sweden is one of the countries that allows marriage between half-siblings who share the same parent; however, they must obtain special permission from the government to do so. In contrast, in some North American cultures, such relationships are prohibited and punishable by imprisonment; those who commit these crimes could be sentenced to up to 10 years in prison if convicted. (13) Dr. Debra Lieberman, an expert in the field at the University of Miami, notes that reproducing with a family member carries a greater chance of a child acquiring two copies of a harmful gene than reproducing with someone outside the family. The closer the genetic relationship between procreating couples, the more likely it is that harmful genes and pathogens will affect their offspring, causing premature death, congenital malformations and disease. (14) Cultural ideologies thus shape these disease risks. We take incest as an example, but thousands of cultures perform practices that are harmful to health; within those cultures, however, they qualify as sacred customs and initiation rites.
---
Disease, health and their cultural bases
Disease and health are two concepts inherent to every culture. A deeper understanding of the prevalence and distribution of health and disease in society requires a comprehensive approach that combines biological and medical knowledge of health and disease and sociological and anthropological issues. From an anthropological perspective, health is linked to political and economic factors that guide human relationships, shape social behavior and influence collective experience. (15) Traditional Western medicine has always assumed that health is synonymous with the absence of disease. (16) From a public health point of view, this means influencing the causes of health problems and preventing them through healthy and wholesome behavior. From medical anthropology to understanding disease, this ecocultural approach emphasizes that the environment and health risks are mainly created by culture. (17) Culture determines the socio-epidemiological distribution of diseases in two ways:
• From a local perspective, culture shapes people's behavior and makes them more susceptible to certain diseases. • From a global perspective, political and economic forces and cultural practices cause people to behave towards the environment in specific ways. (18) Our daily activities are culturally determined, which causes culture to shape our behavior by homogenizing social behavior. People behave based on a particular health culture, sharing sound fundamental principles that enable them to integrate into close-knit social systems. Social acceptance involves respecting these principles and making them clear to others. (19)
---
Health in ancient Egypt
The Egyptians believed death was only a temporary interruption of life and that human beings were privileged to live forever.
The people who dwelt on the banks of the Nile River were born of a complex interplay between spiritual and tangible energies. However, they understood their earthly life as if they were fleeting reflections of the specter that would become their eternal life. (20) The human body, organs, and instincts corresponded to what they called Keto: a being inserted into the physical world that came to life thanks to Ka, the vital force humans acquire their identity. Therein lies the intimate essence of what Freud called ego. The Ba (superego) of mystical origin was superimposed on this force, which became an ineffective union with the Creator. To this set of forces and substances that formed, the subject was assigned a name corresponding to the auditory expression of his personality. (21) In this shadow realm, sickness and death are inherent conditions of human nature, and health and sickness are mere concentrations of metaphysical dramas arising from external causes. (22) Sickness and death were believed to be caused by mysterious forces mediated by inanimate objects, whether living or evil spirits. They believed that the breath of life entered through the right ear, and the breath of death entered through the left ear. (23) The breath of death disturbed the harmony between man's material and spiritual parts. Between the extremes of life and death, health depended on the harmonious interaction of material and spiritual forces. (24) In contrast, the severity of illness depended on the degree of disturbance of harmony.
---
CONCLUSIONS
In this text, we have tried to describe in a general way and with some examples how, since ancient times, people have explained various phenomena and situations about the concept of health and disease, which has played an essential role in culture and civilization.
From this point of view, in ancient times, illness was the primary punishment for wrongdoing, and only fasting, humiliation and various sacrifices would be used to appease the wrath of the gods. With magical or primitive thinking, there was a relationship between the everyday world and the universe and with the sun, the moon and the supernatural world shaped by other gods and demons, which played an essential role as religious concepts in indigenous communities.
About this, we can determine that both in ancient cultures and in the present, certain diseases are suffered that, due to different ideologies, beliefs or customs, are not transited or experienced in a different way. Beyond the specific cultures of each society, health and disease are determined by individual factors that influence how they are defined, the importance they acquire and the way to act on the symptoms of each disease.
Finally, ending this essay with a quote that reflects the theme we have addressed seems appropriate.
"The distribution of health and disease in human populations reflects where people live, when in History they have lived, the air they breathe, and the air they breathe. The History they have lived, the air they breathe and the water they drink; what and how much they eat and drink; what and how much they eat; and how much they drink. Moreover, how much they eat, their status in the social order, and how they have been socialized. social order and how they have been socialized to respond, identify with or resist that status, who they marry, when and whether or not they are married; whether they live in social isolation and have many friends; the amount and medical the medical care they receive, and whether they are stigmatized when they are when they get sick or if they receive care from their community". | 10,992 | 1,681 |
6744113c6ecb95b8cd04c9ece2d465412f5e8611 | Parent and child co-resident status among an Australian community-based sample of methamphetamine smokers. | 2,020 | [
"JournalArticle"
] | Introduction: Children in families where there is substance misuse are at high risk of being removed from their parents' care. This study describes the characteristics of a community sample of parents who primarily smoke methamphetamine and their child/ren's residential status. Design and methods: Baseline data from a prospective study of methamphetamine smokers ('VMAX'). Participants were recruited via convenience, respondent-driven and snowball sampling. Univariable and multivariable logistic regression analyses were used to estimate associations between parental status; fathers' or mothers' socio-demographic, psychosocial, mental health, alcohol, methamphetamine use dependence, alcohol use and child/ren's co-residential status. Results: Of the 744 participants, 394 (53%) reported being parents. 76% (88% of fathers, 57% of mothers) reported no co-resident children. Compared to parents without co-resident children, fathers and mothers with co-resident children were more likely to have a higher income. Fathers with co-resident children were more likely to be partnered and not have experienced violence in the previous six months. Mothers with co-resident children were less likely to have been homeless recently or to have accessed treatment for methamphetamine use. Discussion: The prevalence of non-co-resident children was much higher than previously reported in studies of parents who use methamphetamine; irrespective of whether in/out of treatment. There is a need for accessible support and services for parents who use methamphetamine; irrespective of their child/ren's co-residency status. Conclusions: Research is needed to determine the longitudinal impact of methamphetamine use on parents' and children's wellbeing and to identify how parents with co-resident children (particularly mothers) can be supported. | Introduction
Children in families where there is substance misuse are at high risk of poor developmental outcomes and being placed in out of home care [1,2]. Most of this research has focused on the impact of parents' alcohol misuse on children [3]. Longitudinal studies have shown that parents'/grandparents' dependency on illicit drugs is positively associated with children's substance use and poor psycho-social outcomes [4]. There is a growing body of evidence about the effect of parents' use of methamphetamine on child outcomes. Prenatal methamphetamine exposure has been associated with children's externalising behavioural problems at 5 years [5]. Parents in treatment for methamphetamine use report their children are at high risk of behavioural problems [6,7].
Parents who use methamphetamine are less likely to have co-resident children than parents who use other substances [8,9]. Reports of children aged <18 years co-residing with parents who use methamphetamine vary according to the age and number of children and range from 68% in a community setting to 87.5% for those in treatment [8,10]. In Australia, amongst those in treatment for methamphetamine use, mothers are more likely than fathers to have co-resident children [8].
Crucially, compared to parents who use other substances, those who use methamphetamine are more likely to have attempted suicide, experienced depression, nightmares and flashbacks [8], have high levels of parenting and psychological distress [5,10,11] and have children with behavioural problems [5,10].
No published studies were found that examined the characteristics of Australian parents who primarily smoke methamphetamine and the co-residency status of their children. Two Australian longitudinal studies of consumers who use methamphetamine via any route of administration found being a parent was not independently associated with accessing professional support, reduced methamphetamine use or abstinence [12,13]. Instead, parents' service utilisation was associated with co-morbidity (e.g. mental health) and increased risk of methamphetamine-related harms [12].
Little is known about how to support parents who primarily smoke methamphetamine and are not seeking treatment. To assess the needs and risks in these families, we need to understand their characteristics and living circumstances. The aim of this study was to quantify and describe the socio-demographic, psychosocial, mental health, alcohol and methamphetamine use characteristics of parents, in a cohort of participants who primarily smoke methamphetamine. We specifically examined whether these characteristics differed by parental status, gender or residential status of child/ren.
---
Method
---
Study design and sampling
Data come from baseline surveys administered to a community-based prospective sample of consumers who primarily smoked methamphetamine (the 'VMAX Study'). The cohort was recruited via a combination of convenience, respondent-driven [14] and snowball sampling methods. Eligible participants included those who: were aged >18 years; primarily smoked and used methamphetamine at least monthly in the previous six months; and, lived in metropolitan or rural Victoria. Methamphetamine dependence was assessed using the Severity of Dependence Scale (SDS); a score of >4 is indicative of methamphetamine dependence [15]. The Patient Health Questionnaire (PHQ-9) and the Generalised Anxiety Disorder (GAD-7) instruments were used to measure depression and anxiety [16], and the Alcohol Use Disorders Identification Test-Consumption (AUDIT-C) measured harmful alcohol use [17]. Data were collected via face-to-face interviews and entered directly into a mobile device using REDCap software [18].
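A minimal sketch of how the screening cut-offs described above (SDS > 4 for methamphetamine dependence, and, as noted later in the table footnotes, AUDIT-C ≥ 4 for males and ≥ 3 for females for an alcohol use disorder) map onto the binary classifications used in the analysis; the function and variable names are illustrative only, not part of the instruments themselves.

```python
# Illustrative mapping of screening scores onto the binary classifications used in the analysis.
def classify_participant(sds_score, audit_c_score, sex):
    """Hypothetical helper; cut-offs follow the scale conventions cited in the text."""
    meth_dependent = sds_score > 4               # Severity of Dependence Scale: >4 indicates dependence
    audit_cutoff = 4 if sex == "male" else 3     # AUDIT-C: males >= 4, females >= 3
    alcohol_use_disorder = audit_c_score >= audit_cutoff
    return {"meth_dependent": meth_dependent,
            "alcohol_use_disorder": alcohol_use_disorder}

# Example: SDS of 7 and AUDIT-C of 3 for a female participant.
print(classify_participant(sds_score=7, audit_c_score=3, sex="female"))
# {'meth_dependent': True, 'alcohol_use_disorder': True}
```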
---
Statistical analyses
Variables with a significant association (p < 0.05) in univariable analysis were entered into multivariable logistic regression analyses to estimate associations between (1) participants' parental status (no children, children); (2) fathers' or (3) mothers' child/ren's co-residential status (at least 1 co-resident child, no co-resident children) and socio-demographic, psychosocial, mental health, methamphetamine dependence and harmful alcohol use. For the univariable and multivariable analyses, reported results are (adjusted) odds ratios, 95% confidence intervals and probability-value levels. Univariable analyses excluded missing cases for each independent variable. Adjusted multivariable logistic regression analyses used a complete case approach for missing data (n=1). All statistical analyses were undertaken using the SPSS statistical software package [19].
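To illustrate this two-step strategy (univariable screening at p < 0.05 followed by a multivariable logistic model reporting adjusted odds ratios with 95% confidence intervals), here is a hedged sketch in Python with statsmodels rather than the SPSS package used in the study; the data frame, variable names and simulated values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def or_table(fit):
    """Adjusted odds ratios with 95% confidence intervals from a fitted logit model."""
    ci = fit.conf_int()
    return pd.DataFrame({"OR": np.exp(fit.params),
                         "CI_low": np.exp(ci[0]),
                         "CI_high": np.exp(ci[1]),
                         "p": fit.pvalues})

# Hypothetical data frame, one row per participant (simulated so the example runs end to end).
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "income_above_399": rng.binomial(1, 0.5, n),
    "partnered": rng.binomial(1, 0.5, n),
    "recent_violence": rng.binomial(1, 0.3, n),
})
true_logit = -1.0 + 1.2 * df["income_above_399"] + 0.9 * df["partnered"] - 1.0 * df["recent_violence"]
df["co_resident_child"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

candidates = ["income_above_399", "partnered", "recent_violence"]

# Step 1: univariable screening; keep predictors associated with the outcome at p < 0.05.
retained = [v for v in candidates
            if smf.logit(f"co_resident_child ~ {v}", data=df).fit(disp=False).pvalues[v] < 0.05]

# Step 2: multivariable model with the retained predictors (complete-case analysis).
if retained:
    multi = smf.logit("co_resident_child ~ " + " + ".join(retained),
                      data=df.dropna()).fit(disp=False)
    print(or_table(multi))
```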
---
Ethics
The study was approved by the Alfred Hospital and Monash University Human Research Ethics Committees. Written informed consent was obtained prior to enrolment in the study. Consistent with best practice in alcohol and other drug-related research, participants were reimbursed $40 [20].
---
Results
Of the 744 participants, 394 (53%) were parents. In multivariable Model 1 (Table 1), participants were significantly more likely to be parents if they were older, female, lived outside a major city, identified as Aboriginal/Torres Strait Islander, were in a married/defacto relationship, had a Year 10 education or less, had suffered physical violence in the last six months, or did not have an alcohol use disorder. Of the 394 parents, 297 (76%) had no co-resident children.
Only 12% (28/233) of fathers had at least one co-resident child. In multivariable Model 2, fathers who were in a married/defacto relationship, had a weekly income above $399, and had not experienced violence in the previous six months, were significantly more likely to have at least one co-resident child.
Close to half (43%, 69/160) of mothers had at least one co-resident child. In multivariable Model 3, mothers who had a weekly income above $399, had not been homeless in the last 12 months, and had not utilised professional support for their methamphetamine use in the last 12 months were significantly more likely to have at least one co-resident child.
[Table 1 footnotes]
c) Modified Monash Model geographical classification system of metropolitan, regional, rural and remote areas in Australia (https://www.health.gov.au/health-workforce/health-workforceclassifications/modified-monash-model)
d) Missing data for 1 participant ('don't know')
e) Missing data for 6 participants (2 'don't know', 4 'not applicable')
f) At least one period of homelessness in the last 12 months.
g) Experienced any kind of physical violence in the last 6 months; missing data for 1 participant (refused)
h) SDS: Severity of Dependence Scale, where >4 is classified as methamphetamine dependent
i) AUDIT-C alcohol screen, where males ≥4 and females ≥3 are classified as having an alcohol use disorder
j) GAD-7: Generalised Anxiety Disorder scale
k) PHQ-9: Patient Health Questionnaire depression scale
l) Ever utilised alcohol-and-other-drug services for methamphetamine use: individual/group drug counselling, residential/outpatient detoxification, residential rehabilitation. Excludes pharmacotherapy.
---
Discussion
Our study is one of few to examine characteristics and child/ren residential status of a community-based sample of parents who primarily smoke methamphetamine. Participants who were parents were more likely to report disadvantage and harm. Seventy-six percent of parents (88% of fathers and 57% of mothers) had no co-resident children; that is, all their children lived elsewhere. When compared to the findings of studies where child co-residency is based on one or more non-co-resident children, these results are concerning. An Australian residential treatment data study of child/ren co-residency reported the proportion of parents with at least one child who is not co-resident (i.e. not all children) at 83-88% [8]. Similarly, 68% of parents in a US community-based study reported having at least one non-co-resident child [10].
We found mothers who had non-co-resident children were more likely to access treatment for their methamphetamine use than those who resided with children. This is consistent with previous studies of parents who use or access treatment for methamphetamine; they are less likely to have co-resident children [8,9,12]. This finding was the same for both having ever or recently (past year) accessed treatment for methamphetamine use. In light of previous research [21], it could be that mothers who access treatment may, in part, do so to be reunified with their children. Conversely, mothers who have co-resident children may perceive they have less 'need' for services, or be concerned about losing custody of their children if they seek services [21,22]. Compared to other children, those whose parents misuse any substances are at increased risk of poorer academic, behavioural, emotional and social outcomes [2]. However, women who use methamphetamine and access services face the stigma of being a mother who uses methamphetamine [23], and little is known about the role of treatment services in preventing child custody loss [24]. Further research is needed to determine how mothers who use methamphetamine and have co-resident children can be supported to seek services whilst ensuring the wellbeing of their child/ren.
In our study, depression and anxiety scores were not significantly different between those with and without children, nor between parents with and without co-resident children, but they were very high compared to those reported for the general Australian population in the 2017-18 Australian health survey [25]. This highlights the importance of mental health support and comprehensive primary health care services for parents who use methamphetamine, and for their children.
There were limitations to the study. We did not ascertain the age of children. This may, in part, explain our findings. To account for this, we compared the sample by parent- and child-resident status with estimates from an age- and sex-adjusted representative sample of the Australian population [26] and estimated that more than 90% of parents in our study would have at least one child under the age of 18 years. Parents who use methamphetamine were as likely to have children, but were far less likely to have co-resident children; 12% compared to 75% of the general Australian population of the same age and gender. The data are cross-sectional, so causality cannot be inferred. The sample was not a representative sample; therefore, the generalisability of findings may be limited. Self-reported data are subject to recall and social desirability biases. The number of fathers with co-resident children was relatively small (n=28), and so limited the estimation of smaller but nonetheless clinically meaningful effects.
Follow-up with this prospective cohort will afford opportunities to explore the age, sex and ongoing residential status of participants' children. In the context of parents' substance use, data linkage over a five-year period will provide additional insights into parents' service utilisation.
---
Conclusion
Study findings provided new information regarding the high number of non-co-resident children and the need for accessible support and services for parents who use methamphetamine. Further research is needed to identify optimal ways of supporting these families. | 11,262 | 1,839 |
90f8d75c48070323889abae57a3a664bed03fd29 | Freshmen at a University in Appalachia Experience a Higher Rate of Campus than Family Food Insecurity | 2,018 | [
"JournalArticle"
] | Food insecurity means having limited or uncertain access, in socially acceptable ways, to an adequate and safe food supply. Ample evidence has identified college students as vulnerable to this problem, but little research has focused on freshmen. This cross-sectional study examined family and campus food insecurity among freshmen at a university in Appalachia. An online questionnaire contained sociodemographic items and scales that measured food security status, academic progress, coping strategies for accessing food, and social support. T-tests and Chi square analyses compared food insecure and food secure students. Statistical significance was p<.05. Participants were 456 freshmen, 118 males (26%) and 331 females (73%). Family and campus food insecurity were experienced by 32 (7.1%) and 98 (21.5%) of the freshmen, respectively, and 42.5% of those who experienced campus food insecurity believed their food access had worsened since starting college. Family and campus coping strategies, respectively, included stretching food (72.9 vs. 18.4%) and purchasing cheap, processed food (68.8 vs. 16.3%). Food secure students scored significantly higher on selfrated measures of academic progress (p<.01), and greater proportions of food secure students (60.7 vs. 43.9%, p< .01) perceived their eating habits since starting college as "healthy/very healthy," and perceived their health status as "good/ excellent" (86.0 vs. 71.4%, p<.01). Students requested assistance with job opportunities (19.4%), affordable meal plans (18.4%), money management (13.3%), and eating healthy (11.2%). Findings suggest that college student food insecurity begins during the freshmen year, and that there is a need for campus and community-based interventions to increase food access among these freshmen and their families. | Introduction
Food insecurity means having limited or uncertain access, in socially acceptable ways, to an adequate and safe food supply that promotes an active and healthy life for all household members, while hunger refers to the physiological responses of the body to food insecurity [1]. The U.S. Department of Agriculture Economic Research Service (USDA-ERS) developed the 10-item Adult Food Security Survey Module (AFSSM) and the extended 18-item Household Food Security Survey Module (HHFSSM) to measure the percentage of U.S. adults and households, respectively, that experience food insecurity at some time during a given year [2]. Survey questions focus on the quantity, affordability, and quality of the available food supply, and are worded such that they distinguish between high food security (no reported indications of food-access problems or limitations), marginal food security (one or two reported indications, typically of anxiety over food sufficiency or shortage of food), low food security (reduced quality, variety, or desirability of diet, with little or no indication of reduced food intake), and very low food security (multiple indications of disrupted eating patterns and reduced food intake). In 2016, 12.3% of U.S. households, accounting for 41.2 million people, were food insecure, of whom 10.8 million were very low food secure. Food insecurity has been associated with adverse outcomes in infants, children, and adolescents [5,8] and with compromised physical, cognitive, and emotional functionality in persons of all ages [9][10][11][12]. Additionally, epidemiologic data have linked food insecurity among adults to obesity, type 2 diabetes, and the metabolic syndrome, sometimes termed the "hunger-obesity paradox" [13][14][15]. A variety of food assistance programs are available in the U.S. at the federal, state, and community levels to aid persons living with food insecurity [1,16]. Additionally, food insecure individuals use a variety of coping strategies to access food, including: selling personal possessions; saving money on utilities and medications; bartering; holding multiple part-time jobs; planning menus and cutting food coupons; purchasing less expensive, energy-dense foods to eat more and feel full; eating more than usual when food is plentiful; stretching food to make it last longer; selling their blood; dumpster-diving; participating in research studies; and stealing food or money [17][18][19].
Research findings from post-secondary U.S. campuses indicate that college students are among the population groups vulnerable to food insecurity [20], with reported rates ranging from 14.8% at an urban university in Alabama [21] to 59.0% at a rural university in Oregon [22]. Among the correlates associated with college student food insecurity are: lower grade point average [22,23], on-campus residence [24], living off-campus with roommates [25], being employed while in school [22], older age, receiving food assistance, having lower self-efficacy for cooking cost-effective, nutritious meals, having less time to prepare food, having less money to buy food, identifying with a minority race [21], and having an increased risk for depression, anxiety, and stress [26,27].
Although considerable evidence indicates that college student food insecurity is a public health problem associated with unfavorable health and academic outcomes [20], searches in PubMed, ScienceDirect, and Google Scholar located only one peer-reviewed article that studied this problem among freshmen [27]. These authors measured food insecurity among 209 freshmen living in dormitories on a southwestern campus and reported that 32% had experienced inconsistent food access in the previous month and 37% in the previous 3 months. Additionally, these young students had higher odds of depression, and lower odds of consuming breakfast, perceiving their on-campus eating habits as healthy, and receiving food from parents. The authors concluded that there is a need for interventions to support food insecure students, given that food deprivation is related to various negative outcomes. Since these findings suggest that freshmen, like older college students, may be risking their health and academic success because of food insufficiency, more research is needed that assesses the scope of this problem among first-year college students and identifies predisposing factors and coping behaviors. Accordingly, the aims of this cross-sectional study were to measure the prevalence of family and campus food insecurity and identify correlates among a nonprobability sample of freshmen attending a university in Appalachia, and to compare food insecure and food secure families and freshmen on these correlates. The study site was a university in western North Carolina, a region with high rates of poverty, obesity, and food insecurity [28,29].
---
Methods
---
Participants and Recruitment
A computer-generated randomized sample drawn from all freshmen (n = 2744) enrolled during the spring 2017 semester was sent an electronic recruitment letter containing a link to the questionnaire, followed by reminder emails 1 and 2 weeks later [30]. Interested students clicked on a link that took them to a screen that outlined the elements of informed consent, and those who wished to proceed clicked an "accept" button that took them to the questionnaire. Upon completion, students could click on a link to a screen where they typed their name and email address to enter a drawing for one of two $100.00 gift cards to Amazon.com. This link was detached from the questionnaire link to ensure confidentiality of responses. This research was approved by the Office of Research Protections at the university.
---
Survey Questionnaire
Data were collected using a cross-sectional, anonymous, online questionnaire administered using Qualtrics survey software (Qualtrics, November 22, 2015, Provo, UT). Initial close-ended questions elicited the following types of information: demographic and anthropometric [gender, age, race, family composition, and self-reported weight and height for calculating body mass index (BMI)], economic (employment status, personal monthly income, financial aid status, and meal plan participation), academic [year in school, enrollment status, on or off campus residence, grade point average (GPA), and academic progress]. Their academic progress was assessed using an Academic Progress Scale where the students self-rated their transition to college, overall progress in school including graduating on time, class attendance, attention span in class, and understanding of concepts taught by selecting either "poor," "fair," "good," or "excellent."
Food security status was measured using the 10-item USDA AFSSM, which was completed for the family and campus settings [2]. Next, the students responded to a "yes/no" item asking whether they believed their access to food had worsened since starting college. Those who selected "yes" checked, from the following reasons, those that they believed explained this change: I don't have enough money to buy food, my meal plan card runs out too soon, I often spend money on nonfood items rather than using the money to buy food, I have trouble budgeting my money, and I spend money when I shouldn't because I want to be included in social activities with my friends. Their money spending behaviors were assessed using a Money Expenditure Scale that asked the students to estimate how often they spent money on the following items instead of using the money to buy food by selecting either "never," "sometimes," or "often": alcohol, cigarettes, recreational drugs, car repairs, gasoline, entertainment, tattoos or piercings, prescription medications, make-up and fashion, and school fees. They also checked, from a scrambled list of 17 positive and negative descriptors, those that best reflected how they felt about their food security status on campus (e.g., satisfied, ashamed, secure, frustrated, etc.). Coping behaviors for accessing food were identified using a Coping Strategies Scale focusing on saving (n = 7 items), social support (n = 8 items), direct access to food (n = 10 items), and selling personal possessions (n = 2 items). This scale was completed once for the family setting and again for the campus setting by checking all of the strategies used at each location.
The students rated their eating habits since starting college by selecting either "very unhealthy," "unhealthy," "healthy," or "very healthy," and they rated their health status by selecting either "poor," "fair," "good," or "excellent." Follow-up questions assessed their meal skipping and food consumption behaviors for the campus location only. Meal skipping was assessed using a Meal Skipping Scale that asked how often the students skipped breakfast, lunch, and dinner with the response options "never," "seldom," "most days," and "always." Food consumption data were collected with questions asking approximately how many days/week, on a scale from 0 (zero) to 7, they consumed fruits/juice, vegetables/juice, fast foods, and sweets.
The final two items concerned sources of social support for accessing food on campus. The students checked, from a list of 13 sources (e.g., parents, campus food pantry, etc.), those that had provided them with food assistance, and checked, from a list of 12 policies and learning activities (e.g., more financial aid from school, learn how to shop for affordable, healthy food, etc.), those they believed would help them improve their access to food. The Coping Strategy Scale was compiled with guidance from the food security literature [17][18][19], while the Academic Progress, Meal Skipping, and Money Expenditure scales were developed by the authors.
Content validity of all items was determined by two nutrition professors with experience in questionnaire construction and familiarity with the food security literature. The questionnaire was pilot tested online with a computer-generated randomized sample of 50 freshmen who did not participate in the final study. Student feedback indicated that the links and buttons operated accurately and that the screens displayed an appropriate number of items. Pilot test data prompted deletion of items from the Coping Strategies Scale and addition of items to the Money Expenditure Scale.
---
Statistical Analyses
Data were analyzed using SPSS version 24 (IBM, SPSS Statistics, 2016). The students' food security status was measured using the USDA/ERS scoring scheme for the 10-item AFSSM, such that zero affirmative answers reflected high, 1-2 marginal, 3-5 low, and 6-10 very low food security. Students who scored 0-2 points were classified as food secure, and those who scored 3-10 points as food insecure [2]. The single item concerning perceived health status and the five-item Academic Progress Scale were scored by allotting 1 point to the "poor" and 4 points to the "excellent" responses. The Meal Skipping Scale was scored by allotting 1 point to the "never" and 4 points to the "always" responses, and the Money Expenditure Scale was scored by allotting 1 point to the "never" and 3 points to the "often" responses. Descriptive statistics were obtained for sociodemographic and behavioral variables. Correlational analyses measured associations between AFSSM scores and sociodemographic and behavioral variables, and independent samples t-tests and Chi square analyses compared food insecure and food secure students on these variables. Findings concerning coping strategies and sources of social support were reported only for the food insecure students and their families, in accord with the food security literature [17][18][19]. Statistical significance was p < .05.
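To make the scoring scheme concrete, the following minimal Python sketch (not the authors' SPSS syntax; variable names are hypothetical) illustrates how AFSSM responses described above can be converted into the USDA/ERS food-security categories and the dichotomous food secure/food insecure classification used in the analyses.

# Hypothetical illustration of the USDA AFSSM scoring described above.
# 'responses' holds ten values: 1 for each affirmative answer, 0 otherwise.

def afssm_score(responses):
    """Raw score: number of affirmative answers on the 10-item AFSSM."""
    return sum(responses)

def afssm_category(score):
    """Map a raw AFSSM score to the USDA/ERS food-security categories."""
    if score == 0:
        return "high food security"
    elif score <= 2:
        return "marginal food security"
    elif score <= 5:
        return "low food security"
    return "very low food security"

def is_food_secure(score):
    """Dichotomous classification used in the analyses: 0-2 = food secure, 3-10 = food insecure."""
    return score <= 2

# Example: a student answering four items affirmatively
responses = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
score = afssm_score(responses)
print(score, afssm_category(score), is_food_secure(score))
# 4 low food security False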
---
Results
---
Participant Characteristics
Questionnaires were submitted by 494 of the 2000 recruited freshmen, of whom 38 were disqualified due to insufficient data, resulting in a sample of 456 participants comprising 22.8% of those recruited. Table 1 summarizes the characteristics of the food secure and food insecure freshmen separately, and for the entire freshmen sample.
The gender distribution of the overall sample was about one-quarter male and three-quarters female. Their mean age was 18.5 years (± 1.04, range 18-33). More than three-fourths of the freshmen identified as white, not of Hispanic origin, reflecting the low level of racial diversity at the university, and about three-fourths were from two-parent households. Findings related to campus life indicated that almost the entire sample were full-time students, on-campus residents, and participants in a university meal plan. Economic data indicated that approximately two-thirds of the freshmen received financial aid, about three-fourths were unemployed, and their mean personal monthly income was $83.22 (± $259.35). The students' mean BMI (calculated from self-reported height and weight data) was 23.5 kg/m² (± 4.44, range 14.7-45.8); about three-fourths of the students were underweight or normal weight by BMI and about one-fourth were overweight or obese. When rating their eating habits since starting college, about 40% of the freshmen chose the "very unhealthy" or "unhealthy" responses while approximately 60% chose the "healthy" or "very healthy" responses, and when rating their health status, approximately 20% chose the "poor" or "fair" responses while about 80% chose the "good" or "excellent" responses.
---
Family Food Insecurity
The AFSSM scores indicated that 32 freshmen (7.1%) had experienced food insecurity at home during the year before starting college, while 424 (92.9%) were from food secure families. Gender-based comparisons revealed that 9.5% of the males and 6.3% of the females were from food insecure families. Additionally, 56% of the food insecure and 78.5% of the food secure students were from two-parent families, and 75% of the food insecure and 86.8% of the food secure students were white, not of Hispanic origin. The mean Coping Strategies Scale score for the 32 food insecure families was 2.3 (± 3.1, range 0-18) out of a possible 27 points. There was a significant correlation between family AFSSM scores and their scores on this scale (r= .52, p < .01), such that families experiencing more severe food insecurity used a greater number of coping strategies for accessing food. Table 2 shows the frequency counts and percentages, in descending order, of coping strategy use by food insecure families and by food insecure freshmen on campus.
The strategies used most often by the food insecure families were: stretched food to make it last longer (72.9%), purchased cheap, processed food (68.8%), and cut out food coupons (65.6%). (Note to Table 2: questions were modified to apply to a family or school setting, so some questions were only applicable in one of the situations; a comparable question was asked for the other situation.)
A comparison of the rates of family and campus food insecurity revealed a significantly higher proportion of food insecure freshmen on campus (p < .01). Additionally, 14 (43.8%) of the 32 freshmen who had experienced food insecurity at home were also food insecure on campus. When comparing their food security status at home and on campus, 42.5% of the freshmen who experienced campus food insecurity believed that their access to food had worsened since starting college, and the most important reasons they believed explained this change were: my meal plan card runs out too soon (15.3%) and I often spend money on nonfood items rather than using the money to buy food (13.3%).
---
Comparisons of Food Insecure and Food Secure Students on Campus
The AFSSM scores indicated that 98 freshmen (21.5%) were food insecure at some point during their first year of college, and 358 (78.5%) were food secure. Among the food insecure freshmen, 24.3% were males and 74.9% were females, while among the food secure freshmen 31.6% were males. The nonfood items purchased "often" by the food secure students were: entertainment (17.7%), school-related fees (16.6%), and make-up and fashion (14.3%). The correlation between the students' AFSSM scores and their Money Expenditure Scale scores trended toward significance (r = .09, p = .06), suggesting that the more frequently the students spent money on nonfood items, the more severe was their level of food insecurity.
The terms most often chosen by the food insecure freshmen to describe their feelings about their food access on campus were: fine/okay (22.4%), anxious (16.3%), worried (12.2%), and frustrated (12.2%), while those chosen most often by the food secure students were: fine/okay (21.9%), satisfied (21.6%), and secure (20.2%).
The findings concerning self-assessed eating habits since starting college and perceived health indicated that a greater proportion of food secure students (60.7%) than food insecure students (43.9%) regarded their eating habits as "healthy" or "very healthy" (p < .01), and that a greater proportion of food secure students (86.0%) than food insecure students (71.4%) perceived their health status as "good" or "excellent" (p < .01). A significant difference emerged between the mean Meal Skipping Scale scores of the food insecure and food secure students, respectively (5.8 ± 1.60, range 3-10 vs. 6.3 ± 1.41, range 3-9, p < .01), out of a possible 12 points, indicating that the food insecure students tended to skip fewer meals. Breakfast was the meal most often skipped by both food insecure (62.3%) and food secure (52.4%) students. Food consumption data indicated that food insecure and food secure students, respectively, consumed fruits/juice an average of 4.8 versus 4.7 days/week, vegetables/juice 4.9 versus 4.8 days/week, fast food 3.9 versus 4.1 days/week, and sweets 3.9 versus 4.2 days/week. No significant differences emerged between any of these mean food consumption scores.
---
Coping Strategies and Sources of Support Used by Food Insecure Students on Campus
The mean Coping Strategy Scale score for the 98 freshmen who experienced food insecurity on campus was 1.0 points (± 1.6, range 0-14) out of a possible 27 points, and a significant positive correlation emerged between the students' AFSSM scores and their scores on this scale (r = .26, p < .05), indicating that students who experienced more severe food insecurity used a greater number of strategies for accessing food. The three most frequently used strategies were: purchased cheap, processed food (18.4%), stretched food to make it last longer (16.3%), and shared groceries and/or meals with relatives, friends, or neighbors (15.3%). These food insecure freshmen identified the following sources as those that had offered the most help in accessing food at school: parents (28.6%), friends (15.3%), and boyfriend or girlfriend (8.2%). They also identified the following items as those they thought would be most helpful in improving their food access: part-time or full-time job (19.4%), more affordable meal plan (18.4%), learn how to manage their money and make a budget (13.3%), learn how to shop for affordable, healthy food (12.2%), and learn how to eat healthy (11.2%).
---
Discussion
The freshmen in this study experienced food insecurity at a rate that was three times higher on campus compared to when they lived at home, suggesting that the problem of college student food insecurity begins during the freshman year. The present findings support those of Bruening [27] in documenting a high rate of food insecurity among first-year college students and in identifying associated health concerns. The present findings also add to the ample evidence from U.S. post-secondary campuses that college student food insecurity is a public health problem [20] that could compromise the students' mental and physical health [9,11-15] and possibly jeopardize their academic success [22,23]. Accordingly, in the present study, smaller proportions of food insecure than food secure freshmen assessed their health status as either "good" or "excellent." Additionally, the food secure freshmen earned a significantly higher mean score on the Academic Progress Scale, suggesting that, for the food insecure students, their transition to college, class attendance, attention span in class, and ability to understand concepts taught may have been adversely impacted by the discomforts associated with hunger.
The considerably lower rate of family than campus food insecurity reported by the freshmen may have been partially attributable to parental coping strategies intended to protect their children from food deprivation at home, and that once their children moved away, these protective measures were more difficult to implement. Examples of such parental "buffering" activities reported in the food security literature include asking relatives for money and stretching meals to mitigate family food shortages [31,32]. Similar familial coping strategies were identified by the food insecure freshmen the year before starting college, i.e., stretching food to make it last longer and purchasing cheap, processed food. Subsequently, these same practices were used by the students on campus. Such dietary practices, likely learned at home, suggest that at times these students avoided the discomforts of hunger by consuming diets featuring foods high in fats and simple carbohydrates and low in protein, micronutrients, and fiber. Regular consumption of such energy-dense diets is risky since such eating habits could compromise the students' nutrient reserves and increase their risk for overweight and obesity in the long-term [13][14][15]. This speculation is supported by the findings that the food insecure freshmen, like their food secure peers, did not consume fruits or vegetables on a daily basis, consumed fast foods and sweets at least 3 days per week, and frequently skipped meals. Although such eating habits have been widely reported for college students in general [33,34], in the present study smaller proportions of food insecure than food secure students regarded their eating habits since starting college as either "healthy" or "very healthy."
The unhealthy dietary practices of the food insecure students in particular are of concern because these behaviors may have, in some instances, been due to food scarcity rather than to personal food preferences and busy lifestyles. In this regard, a greater proportion of food insecure than food secure freshmen believed that their food access had worsened since starting college. This belief was reflected in the terms these students chose to describe their feelings concerning their food situation on campus, i.e., anxious, worried, and frustrated. Perhaps the reasons the fine/okay descriptor was chosen most frequently were a reluctance to admit that they were unable to access as much food as they would like or to complain about their food situation. Two of the most frequently reported reasons for their worsening food security concerned financial constraints, i.e., the monetary value of their meal card ran out too soon and they lacked money to buy food. Similar findings were reported for food insecure college students in Alabama and Oregon, respectively [21,22]. It is also possible that the students' misuse of their limited funds may have played a significant role in their declining food access, given that they "often" spent money on nonfood items rather than using the money to buy food. To illustrate, 21% of the food insecure freshmen reported that they "often" spent money on entertainment.
The findings from this study indicate that the participating freshmen need, and have asked for, various kinds of assistance to improve their food access and diet quality. For example, the students requested learning opportunities that would teach them how to manage their money, make a budget, purchase nutritious, affordable foods (whether using their meal cards on campus or using personal funds on or off campus), and make healthy food choices. They also suggested policies and programs they believed would improve their food access on campus, i.e., more part-time and full-time jobs and more affordable meal plans. Community health professionals, including Registered Dietitians, social workers, and health educators, are uniquely qualified to make positive contributions toward decreasing food insecurity and hunger among these young adults by implementing interventions and engaging in policy advocacy that address these student concerns. Additionally, offering similar programs to parents from food insecure households in community settings might assist these parents to provide healthy daily meals to their families. Lohse et al. [35] found that participation in such interventions enhanced the food budgeting and healthy meal planning skills of food insecure women.
---
Study Limitations and Strengths
This study had limitations that prevent the generalizability of the findings to the population of U.S. college freshmen, i.e., use of a nonprobability sample, data collection on a single campus located in a rural county, self-reporting of all measures, and overrepresentation of females and white students. Additionally, the small number (n = 32) of freshmen who reported family food insecurity made it difficult to identify relationships between family food security status and other correlates. This small number may have been attributable to the students' reluctance to disclose family food insecurity out of concern that their parents would be perceived as negligent or incapable, despite the anonymity of their responses. Nevertheless, the present findings add to the growing evidence that food insecurity is a serious health problem among freshmen and their families that deserves further study. For example, more research is needed with larger, more diverse samples in urban and rural communities to glean a better understanding of the scope of the problem and contributing factors in family and school settings. Research is also needed that evaluates the effectiveness of campus and community food assistance programs such as food pantries to determine whether they are being used by needy freshmen and their families and whether the food offerings are of a quality that promotes healthy families.
---
Conflict of interest
The authors declare that they have no conflicts of interest.
Ethical Approval This research was not funded, and approval was obtained from the Office of Research Protections at the university prior to data collection.
Informed Consent An informed consent letter was included in the questionnaire prior to the first item. Students who did not wish to participate after reading this letter could exit the questionnaire by clicking on an "exit" button. | 26,639 | 1,814 |
e1c4b1697a015617e425405e0d1e2389c7392e57 | Having children outside a heterosexual relationship: options for persons living with HIV | 2,018 | [
"JournalArticle"
] | Currently 2217 (doesn't include 'further information' or 'references') All authors contributed to this paper with Dr Tristan Barber having overarching and final responsibility for collating individual works into the finished article. The Corresponding Author has the right to grant on behalf of all authors and does grant on behalf of all authors, an exclusive licence (or non exclusive for government employees) on a worldwide basis to the BMJ Publishing Group Ltd to permit this article (if accepted) to be published in STI and any other BMJPGL products and sub-licences such use and exploit all subsidiary rights, as set out in our licence http://group. bmj. com/products/journals/instructions-for-authors/licence-forms. | Introduction
This article presents information about the social, legal and medical issues that medical and non-medical practitioners in the UK i should consider in order to signpost options for people living with HIV (PLWH) who are not in a heterosexual relationship and want to become parents. Despite significant medical advances, increased medical awareness amongst HIV practitioners, and the ability to live a full life with HIV, stigma still exists around PLWH wanting to have children. There is a lack of awareness amongst the general public and the non-specialist medical community, about the realities of living with HIV, and the options available to become a parent.
Vertical transmission rates in the UK are very low (<0.5%) [1]. Despite this, even amongst PLWH it is evident that stigma surrounding parenting with HIV is real, with almost 50% of HIV-positive respondents in a European study saying that having HIV would be a barrier to them deciding to have a family [2]. Irrespective of their sexual orientation, HIV-positive parents and prospective parents may bear not only the brunt of an historical HIV stigma, but also the negative discourses that surround lesbian, gay, bisexual or transgendered/gender diverse (LGBT) parenting, despite the legal advances over the past decade.
First steps to breaking down this stigma are to increase public awareness around the realities of living with HIV, and awareness among PLWH that being a parent is an option for them. In 2016 in London, the UNAIDS 90-90-90 target was achieved for the first time. England came close to meeting that target, with 88% of those living with HIV diagnosed, 96% of those diagnosed on HIV treatment, and 97% of those on treatment having an undetectable viral load [3]. Most PLWH taking antiretroviral medication therefore have undetectable levels of HIV in blood, meaning they cannot transmit HIV via sexual fluids [4].
Despite this, parenting is not always routinely discussed with PLWH. A recent study in London HIV clinics found that very few clinicians spoke with HIV-positive gay men about the possibility of having children [5].
Misconceptions about HIV transmission risk and medico-legal issues concerning reproduction may, thus, be rarely addressed. Education is also key to challenging stigma, and supporting the medical profession to better advise HIV-positive patients is critical, as a medical appointment is often the first opportunity that people who are newly diagnosed have to think about future options.
i The legal content is UK-wide (since it derives from the UK Human Fertilisation and Embryology Act 2008). Other aspects, such as NHS funding for fertility treatment, may vary between UK countries.
---
Transmission
For parents living with HIV, the key to conceiving a child while preventing transmission to the unborn baby lies in the current evidence on viral load and the risk of HIV transmission.
PrEP (Pre-Exposure Prophylaxis) is a new way of preventing HIV transmission. HIV-negative people can take a tablet (containing two active drugs, tenofovir and emtricitabine) before they have unprotected sex. Taking PrEP has been shown to be highly effective at preventing HIV acquisition [6,7,8]. PrEP is different to PEP (Post-Exposure Prophylaxis): PEP is a medication regimen taken for 28 days after a potential exposure to HIV.
When a person first becomes HIV positive they will have a very high viral load, which makes the chance of transmitting the virus very high. When a patient commences therapy the viral load falls rapidly, the aim being an undetectable level in plasma (<50 copies/ml). Once someone is undetectable, their HIV is untransmittable (U=U ii) [4,9].
Both PEP and PrEP were considered useful tools for reducing HIV transmission around the time of conception in sero-discordant heterosexual couples. However, since the adoption of U=U, their use is no longer recommended for this purpose: so long as the HIV-positive parent is undetectable, PEP and PrEP are not recommended to safeguard the HIV-negative parent.
This has transformed the options PLWH have regarding parenting, although the ethical and legal frameworks for some options lag behind the evidence. For example, serodifferent heterosexual couples where the male partner is HIV infected are no longer advised to undergo sperm washing if the male partner satisfies U=U criteria, but when surrogacy or donor insemination is considered extra barriers remain in place for those affected by HIV. Some USA clinics use the 'Bedford Programme' [10] to allow HIV positive men to pursue conception via these routes but regulatory frameworks in the UK do not support this approach.
ii https://www.preventionaccess.org/consensus; accessed 26th April 2018
---
Women and Fertility
Women living with HIV have been found to have reduced fertility, which may be due in part to an increased prevalence of tubal factors. Men living with HIV, especially if infection occurred around puberty, may have a reduced sperm count [11]. Couples are therefore advised to seek fertility investigations if they have tried to conceive for 6 months without success, or are known to have had previous pelvic infections.
---
What's possible?
There are a number of possibilities for PLWH to have biogenetically related children. Parental gender, relationship status and financial resources will impact on the available options.
---
Single Women
For single women, many UK fertility clinics offer treatment with donor sperm. The success rates for intrauterine insemination (IUI) are around 10% so many women opt to have in vitro fertilisation (IVF) which has better success rates, but is more expensive, particularly if paying privately. Local Clinical Commissioning Groups (CCGs) will have specific policies on what funding might be available through the NHS. For example, some CCGs will not fund fertility treatment for single women.
---
Single Men
The options for single men are not as straightforward. Although many UK clinics can provide access to treatment using donor eggs, a parental order (which reassigns parentage after surrogacy) can only be obtained by two people, who have to be either married, in a civil partnership or living as partners. The law is currently being changed to allow single parents to apply for parental orders, with the changes (at the time of writing) due to come into force in late 2018 or during the course of 2019.
In the meantime there are other ways of obtaining parental responsibility for single parents and surrogacy is, in reality, an option for both single men and women.
Single people can also choose a co-parenting route. For example, a woman (the "birth mother") and a man with whom she is not in a relationship can choose to have a child together -they do not need to be in a relationship, but both people would be the legal parents of a child, with all the responsibilities that that brings. Having a legal document that sets out parenting agreements and arrangements prior to a child's birth is a useful tool in these circumstances, as it offers protection to both parties. Additional infectious disease screening (for sexually transmitted infections and other blood-borne viruses such as hepatitis B and C) might also be necessary to remove the risk of infection through using a male co-parenting partner's sperm that the parent carrying the child wouldn't otherwise be exposed to. Provided U=U criteria are met then there is, of course, no risk of HIV exposure to the uninfected parent or to any conceived child.
A single man, or same sex male couples, can commission a surrogate host to carry their baby. The law in this area is complex and surrogacy agreements are not enforceable in the UK. Commissioning individuals or couples use donated eggs, which are inseminated with the sperm of either partner to create the embryos that are then transferred to a surrogate host. It is not possible to use donor sperm in this scenario because one person must be genetically related to a child before a parental order can be issued [12]. In the UK, fertility clinics can only legally use HIV negative sperm with a surrogate, so some parents go overseas for treatment instead, usually to the USA where there are established fertility treatment and surrogacy programmes for intended parents living with HIV. Parents will need to apply for a UK parental order after a child is born to secure legal parentage.
---
Couples
Same sex female couples can access treatment using donor sperm, but if they opt for IVF rather than IUI they can choose to explore one partner donating eggs to the other and vice versa. Despite this, for a woman living with HIV it is not possible, due to HFEA regulation, to provide eggs to her partner (or any other recipient) at a fertility clinic in the UK. There is a legal obligation to follow medical advice to minimise any risk of transmission to an unborn baby, and a fertility clinic will also need to consider the welfare of a child before treatment. If a UK clinic is used, both can be registered on the birth certificate as a child's legal parents if they sign the correct forms at the clinic before conception and the donor's rights are extinguished. If they conceive by artificial insemination elsewhere, then both can be recorded on the birth certificate if they were married or in a civil partnership at the time of conception [13].
Whilst full exploration of issues for those who identify as transgender or transitioning is outside the scope of this article, it is worth noting that for these individuals options vary depending on whether they stored gametes before transition or not. Anyone storing gametes should have 'at the time of donation' infection screening. Unless this confirms HIV-negative status, it is not possible to transfer embryos to a surrogate host or a co-parent in the UK in the future.
---
Funding
NHS funding for non-heterosexual parenting varies but is worth investigating in cases of known infertility. If NHS funding or private financial resources are not available some women egg share, giving them the chance to help others while receiving benefit in kind to fund their own treatment. Some clinics also offer men the option to sperm share. PLWH are not permitted to participate in egg or sperm sharing under UK regulations so these options are not available to them.
---
Options for fostering and adoption
Being HIV positive would not, on its own, prevent an adoption or fostering assessment from being undertaken or be a barrier to adopting or fostering a child. Applicants who are single or in same-sex relationships are encouraged to apply. However, no one has the 'right' to foster or adopt a child. Agencies assess applicants to ensure that all adopters and foster carers have the necessary qualities and experiences to care for children who have had traumatic and abusive experiences. The challenges that applicants with HIV have handled successfully in their own lives may well be regarded as assets in the assessment process. The assessment includes health (including mental health) inquiries to ensure that applicants have a reasonable expectation of continuing good health and, in the case of adoption, the ability to support a child until adulthood. Although legally an HIV status does not need to be disclosed, in practice it is never advisable to keep it a secret, especially as the assessment process is built entirely on openness. A letter from an HIV specialist can provide the assessing local authority, the adoption medical adviser and the adoption panel with evidence about the health of the applicant(s) with HIV, including commenting on life expectancy, and this can also include information about the impossibility of HIV transmission from domestic contacts. The agency should only share an applicant's HIV status on a 'need-to-know' basis, with informed consent. This is an issue that should be discussed with an assessing social worker.
---
Supporting PLWH parenting
Some of the policy and practice in relation to positive parenting appears to be out of step with the current scientific evidence as we have seen. In addition, the social, psychological and emotional implications of parenting among LGBT people living with HIV can be considerable, as parenting itself represents a significant change to identity. Becoming a parent can change one's relationship with one's partner, family and social environment, as well as the 'identity hierarchy' in that parenthood can become more important than other dimensions, such as one's occupational identity [14]. As with other stigmatised identities there is a high prevalence of poor mental health and childhood psychological adversity in HIV patients [15,16]. Strategies and interventions for promoting and enhancing social, psychological and emotional wellbeing are essential. Any potential psychosocial challenges of positive parenting could be addressed through counselling, mental health care and mutual social support from other positive parents. The recently updated BHIVA Standards of Care [17] may be referred to for more detail about expected levels of emotional wellbeing and support.
---
Summary
Many options are available for PLWH who are considering parenting. Asking about this as part of routine care helps support destigmatising messages about normal life expectancy with HIV infection. Further work needs to be done to educate medical professionals and the wider public about the U=U message, and about positive experiences of LGBT parenting. National guidelines and standards for HIV care should include resources to support PLWH choosing to parent, ensuring that parenting desire is enquired about and recorded. Ethical frameworks to support biological parenting for PLWH should be developed so that it is integrated into 'business as usual' service delivery.
---
Further information
More information is available at a newly launched resource hub at www.hivandfamily.com, spearheaded by The P3 Network (www.thep3network.com) as part of its 'Positive Parenting' campaign, with the key message that 'HIV doesn't define a parent's power to love'. The campaign and resource hub was backed by organisations including the British HIV Association, Children's HIV Association, Terrence Higgins Trust and clinicians at the Royal Free and Chelsea and Westminster NHS Foundation Trusts. | 14,269 | 723 |
a951f27028e3aabec6c6948e5e4e9a1d0807c406 | Development and Validation of the Adolescent Triangulation Scale | 2,023 | [
"JournalArticle",
"Review"
] | Triangulation is conceptualized as the involvement of a third person in a dyadic relationship in order to balance excessive conflicts, intimacy, and distance and provide stability within the system. A self-report scale to measure adolescents' triangulation into inter-parental conflicts was developed, and the psychometric properties of the scale were established. The study was conducted in a three-phase format. Data were collected from adolescents (10-19 years) of different schools and colleges in Pakistan. In Phase I, items were generated through a literature review and focused group discussion. In Phase II, four latent factors (pushed-out, pulled-in, mediator, balanced) were extracted through EFA (N=493). Phase III comprised a test of dimensionality, reliability, and validity. The dimensionality of the Adolescent Triangulation Scale was established through CFA (N=494). Reliability of the scale was established through Cronbach's alpha (α= .87-.90) and composite reliability (CR= .88-.92). Furthermore, the validity of the scale was assessed through Average Variance Extracted (AVE= .55-.69), Maximum Variance Shared (MVS= .88-.93), the Fornell and Larcker criterion, and the Heterotrait-Monotrait criterion. Results showed that the Adolescent Triangulation Scale appears to have good psychometric properties and contributes to the literature on family systems theory by allowing for a more nuanced measurement of triangulation than was previously available. | Introduction
In the realm of family dynamics, understanding the complexities of adolescent involvement in parental conflicts is paramount for comprehensive psychological research. This research delves into the development and validation of a novel instrument, the Adolescent Triangulation Scale (ATS), designed to meticulously measure and quantify the complex phenomenon of adolescent triangulation. Triangulation, referring to the involvement of a third party in the relationship between two others, is a concept deeply rooted in family systems theory. The scale's construction is informed by theoretical frameworks proposed by scholars such as Kerr and Bowen (1988) and Bell et al. (2001), offering a nuanced perspective on the complex roles adolescents play within familial disputes. Through a meticulous process of item development, expert evaluation, pretesting, and statistical analyses, this study presents a robust scale that not only encapsulates the multidimensional nature of adolescent triangulation but also ensures its validity and reliability. The research aims not only to provide a valuable measurement tool for future studies but also to contribute significantly to the evolving landscape of family psychology, particularly in understanding the dynamics of adolescent involvement in parental relationships.
---
Triangulation
Triangulation, a concept fundamental to family systems theories, refers to the process of involving a third person in the association of two others. This third person could be anyone from children, parents, grandparents, therapists, friends, or even pets (Kerr & Bowen, 1988). Early family therapy pioneers, such as Bowen, emphasized triangulation as a means to reduce anxiety in dyadic relationships by bringing in a third party.
---
The Triangle as a Fundamental Unit
Bowen (1988) conceptualized the emotional triangle as the fundamental unit of an emotional system. Unlike psychoanalytic oedipal triangles, which focus on sexual issues, Bowen's emotional triangles explain a broader emotional process within relationships. These triangles stabilize relationships during both calm and tense times by dispersing stress and anxiety over the three corners of the triangle.
---
Interlocking Triangles
In families with more than three members, the concept of interlocking triangles arises. For example, in a nuclear family where a father is in conflict with both the son and daughter, the tension may indirectly affect the mother. Kerr and Bowen (1988) propose that while a fundamental triangle may suffice during calm times, increasing anxiety leads the fundamental triangle to interact with other family triangles, even at the societal level.
---
Conceptualization of Triangulation
Bowen (1978) posited that triangulation occurs in response to three system-level processes or interactions within families:
---
Inter-parental Conflict
Both overt and covert conflicts lead to triangulation. Covert conflicts, equally harmful, may drive parents to involve children in the parental dyad to resolve their issues (Bradford et al., 2019).
---
Lack of Differentiation of Self or Family Fusion
When self-differentiation is low, fusion increases, resulting in undifferentiated family ego masses. Triangulation emerges as a symptomatic product of spreading tension in dyadic relationships.
---
Parent-Child Alliances
Power struggles or alliances between a child and one parent against the other may occur due to neglect or dysfunction in the marital dyad. This type of triangulation can lead to various difficulties for the child.
---
Present Study
The present study aimed to develop an indigenous instrument to measure adolescent triangulation in inter-parental conflicts. The scale was developed in the native language of Urdu so that the majority of the population would understand and respond accurately. Specific objectives include the development of the indigenous Adolescent Triangulation Scale, the establishment of its factorial structure, and rigorous testing of its reliability and validity.
The rationale behind developing the Adolescent Triangulation Scale (ATS) for the Pakistani population stems from the need to investigate how adolescents in Pakistan navigate inter-parental conflicts. Triangulation, commonly understood as the involvement of a third party in the relationship between two individuals, has been a topic of interest in family systems theories (Minuchin, 1974; Satir & Baldwin, 1983; Haley, 1987; Kerr & Bowen, 1988). Despite global research on triangulation, its exploration in Pakistan remains limited. Cultural norms and religious values in collectivistic Eastern societies, like Pakistan, may influence how adolescents perceive and experience triangulation differently from their counterparts in Western societies. This study aims to fill this gap by designing a valid and reliable instrument tailored to the Pakistani context (Bresin et al., 2017; Bray et al., 1984; Grych et al., 1992; Perosa et al., 1981). The development process involves a thorough literature review, focus group discussions, content analysis, and rigorous psychometric testing, ensuring cultural sensitivity and applicability to the unique dynamics of Pakistani families (Boateng et al., 2018; Kohlbacher, 2005; Lawshe, 1975).
---
Method
The development of the Adolescent Triangulation Scale (ATS) is based on the guidelines outlined by Boateng et al. (2018). While following this guideline, the present research aimed to develop a psychometrically sound multidimensional scale. The steps of scale development, as suggested by Boateng et al. (2018), are as follows.
---
Phase I: Item Development
The creation of items for scale development is a critical step in the development of a reliable and valid measuring instrument. The following are the general steps in item development.
Domain Identification. Triangulation was explored by all of the main family systems theorists. However, the researcher in the current study concentrated on Bowen family systems theory (Kerr & Bowen, 1988) while establishing the Adolescent's Triangulation Scale (ATS) principally because it provides an elegant and comprehensive theory of the family system and is still presently used extensively and effectively in clinical work (Gavazzi & Lim, 2023). Before developing items, an extensive literature review regarding descriptions, examples, types, and definitions of triangulation from Bowen (1978), Kerr and Bowen (1988), Bell et al. (2001), Klever (2008), LaForte (2008), Titelman (2008) and Gavazzi and Lim (2023) was done.
Item Generation. For item generation, both deductive and inductive methods were used, as suggested by Clark and Watson (1995). The deductive technique includes a review of the literature as well as an evaluation of current triangulation scales. The inductive technique uses the qualitative data gained from the focus group discussions.
---
Literature Review
At the first stage of item generation, literature regarding triangulation and its types was thoroughly reviewed. To conduct the literature review, up-to-date and authoritative research journals and databases were consulted (e.g., Buehler & Welsh, 2009; Buehler et al., 2009; Amato & Afifi, 2006; Franck & Buehler, 2007). Moreover, some scales/questionnaires devised to study triangulation were also approached. Possibly the most commonly used measures of triangulation are two subscales of the Personal Authority in the Family System Questionnaire (PAFS-Q; Bray, Williamson, & Malone, 1984), i.e., the Intergenerational Triangulation (INTRI) and Nuclear Family Triangulation (NFTRI) subscales. However, the Triangulation subscale of the Children's Perception of Inter-Parental Conflict Scale (CPIC; Grych et al., 1992), the Structural Family Interaction Scale (Perosa et al., 1981), and the Triangular Relationship Inventory (Bresin et al., 2017) were also used to measure family triangulation. All the scales were carefully assessed.
---
Focused Group Discussion
The primary aim of the Focus Group Discussions was to explore the concept of triangulation within the Pakistani population, a novel focus in the local research culture. Four focused group discussions were conducted. The first group comprised six girls (14-18 years) from both nuclear and extended families, with a minimum education level of middle school. The second group involved five boys (15-19 years) from nuclear and extended families, also with a minimum education level of middle school. The third group consisted of six mothers (38-49 years) from nuclear and extended families, including housewives and working women, all having completed at least their school education. The fourth group involved seven fathers (42-50 years) from nuclear and extended families, all having completed at least their school education. Participants were formally introduced to each other, and the purpose and objectives of the focus group discussions (FGDs) were clarified. A semi-structured focus group guideline was used to explore participants' perspectives on triadic relationships. The researcher served as a moderator.
---
Content Analysis
In order to generate codes, themes, and sub-themes, content analysis was performed. The results provide valuable insights into the dynamics of parental relationships and their impact on children, contributing to a deeper understanding of family dynamics and relationships. The concept of triangulation aligns dimensionally with Kerr and Bowen's (1988) theoretical model. The qualitative report reveals major themes, i.e., Pushed-Out, Mediator, Balancing, and Pulled-In, providing insights into parental dynamics.
The Pushed-Out theme underscores parents' child-centric focus, prioritizing children's well-being and shielding them from conflicts. The Mediator theme highlights children's active role in improving parental relationships through communication and cooperation. The Balancing theme emphasizes a parental approach to independently managing conflicts, fostering a peaceful family environment. The Pulled-In theme delves into instances where children are inadvertently involved in parental issues, exploring sub-themes like manipulation and emotional dependence. Overall, these themes contribute to a nuanced understanding of family dynamics and relationships, shedding light on the intricate interplay of parental behaviors and communication in consideration of children's well-being.
---
Generating Initial Item Pool
The synthesis of literature findings and FGD data led to the formulation of an initial item pool comprising 40 items, conceptualized from the four key dimensions of triangulation delineated by Bell et al. (2001): (a) balanced, (b) mediator, (c) pulled-in (cross-generational coalition), and (d) pushed-out (scapegoating). This iterative process ensured that the item pool was not only theoretically grounded but also culturally relevant, setting the stage for subsequent psychometric validation.
---
Establishing Content Validity
To assess content validity, Lawshe's method (1975) was applied, engaging eleven specialists well-versed in family systems theory, particularly triangulation. Each expert evaluated the 40 items individually, categorizing them as essential, useful but not essential, or not essential. The Content Validity Ratio (CVR) cutoff score, set at 0.63 for 11 raters, was employed. A total of 34 items, with at least eight items per theoretical domain, met the CVR criteria and were retained. Face validity was also affirmed, as experts deemed all items appropriate.
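Lawshe's CVR is straightforward to compute from the panel's ratings. The sketch below is only an illustration with made-up vote counts, not the study's actual expert data:

```python
# Hypothetical illustration of Lawshe's (1975) Content Validity Ratio (CVR):
# CVR = (n_e - N/2) / (N/2), where n_e is the number of experts rating an item
# "essential" and N is the total number of experts on the panel.
ratings = {                      # made-up counts of "essential" votes per item
    "item_01": 11, "item_02": 10, "item_03": 7, "item_04": 9,
}
N = 11                           # panel size used in this study
CUTOFF = 0.63                    # critical CVR for 11 raters reported above

for item, n_essential in ratings.items():
    cvr = (n_essential - N / 2) / (N / 2)
    decision = "retain" if cvr >= CUTOFF else "drop"
    print(f"{item}: CVR = {cvr:.2f} -> {decision}")
```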
---
Phase II: Scale Development
---
Scaling Method
The Adolescent Triangulation Scale utilized a five-point Likert-type scoring system aligned with the approach recommended by Krosnick and Presser (2009) to effectively capture individual response variations. Respondents provided feedback on the scale using a 5-point Likert-type format, where 1 signified strong agreement, and 5 denoted strong disagreement.
---
Pretesting Questions
Following item development and content validity ratio establishment, cognitive interviews with five adolescents were conducted to identify confusing or problematic questions. The feedback indicated that all 34 items in the Adolescent Triangulation Scale were succinct and easily comprehensible, with participants reporting no difficulties.
---
Sample
The sample for this phase comprised 494 adolescents (boys = 230, girls = 264) aged 10 to 19 years (M = 17.65, SD = 2.17). The sample included students from government (n = 284) and private (n = 210) schools and colleges in Rawalpindi and Islamabad. A convenience sampling procedure was employed; adolescents with single parents, those living independently, those who were completely illiterate, and those diagnosed with mental or physical disabilities were excluded.
---
Sample Suitability
Bartlett's test of sphericity (χ²(351) = 8502.54, p < .001) signified the data's suitability for factor analysis. The Kaiser-Meyer-Olkin (KMO) value of 0.91, exceeding the recommended threshold, confirmed the data's appropriateness for factor analysis.
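As a rough illustration rather than the authors' procedure, both checks can be reproduced in Python with the factor_analyzer package, assuming an items-by-respondents data file:

```python
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# 'responses.csv' is a placeholder: rows = respondents, columns = scale items.
items = pd.read_csv("responses.csv")

chi_square, p_value = calculate_bartlett_sphericity(items)   # Bartlett's test
kmo_per_item, kmo_total = calculate_kmo(items)                # KMO index

print(f"Bartlett chi-square = {chi_square:.2f}, p = {p_value:.4f}")
print(f"Overall KMO = {kmo_total:.2f}")   # values above ~.80 support factoring
```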
---
Extraction of Latent Factors
Exploratory Factor Analysis (EFA) was conducted to unveil the factorial and dimensional structure of the 34 items. Principal-axis factoring (PAF) initially revealed a five-factor model explaining 58.73% of the total variance. However, considering the eigenvalues and the scree plot, a more condensed four-factor model was contemplated, recognizing potential adjustments to enhance the scale's precision. Understanding the latent construct faced challenges due to disparities between the eigenvalue and scree-plot interpretations. Consequently, a meticulous evaluation of individual items became imperative for potential removal, guided by factor loadings, cross-loadings, and communality estimates. The criteria of Pett et al. (2003) were employed: items with factor loadings below .40 were deleted, and those with cross-loadings exceeding .32 on multiple factors were considered for removal. Seven items were eliminated, leading to a final set of 27 items. Another iteration of principal-axis factoring was conducted, revealing a four-factor model explaining 58.11% of the cumulative variance.
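A hedged sketch of this EFA workflow is shown below; the data file, the promax rotation, and the pruning loop are illustrative assumptions and not the authors' exact specification:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("responses.csv")          # placeholder: respondents x items

# Principal-axis factoring with an oblique rotation; four factors as retained above.
fa = FactorAnalyzer(n_factors=4, method="principal", rotation="promax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)

# Pett et al. (2003)-style pruning: drop items loading < .40 everywhere,
# and flag items cross-loading > .32 on more than one factor.
weak = loadings[loadings.abs().max(axis=1) < 0.40].index.tolist()
cross = loadings[(loadings.abs() > 0.32).sum(axis=1) > 1].index.tolist()
print("Candidates for deletion:", sorted(set(weak + cross)))

# Variance explained by each retained factor (proportional and cumulative).
ss_loadings, prop_var, cum_var = fa.get_factor_variance()
print("Cumulative variance explained:", cum_var)
```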
---
Factor I: Pushed-Out Triangulation
Eight items (29.72% of the total variance). Pushed-out triangulation reflects aspects of scapegoating, where adolescents assume a pushed-out position. It also measures a form of triangulation wherein parents shift attention to different aspects of the adolescent's life instead of focusing on marital conflicts.
Factor II: Mediator Triangulation
Seven items (12.41% of the total variance). Mediator triangulation centers on the adolescent feeling caught between the parents' marital disputes. It emphasizes the adolescent's role as a middle person in the parental relationship, with a maximum factor loading of .87.
---
Factor III: Balanced Triangulation
Five items (9.76% of the total variance). Balanced triangulation represents a healthy relationship in which parents take responsibility for their relationship problems. It emphasizes a balanced dynamic, with the highest factor loading being .78.
Factor IV: Pulled-In Triangulation
Seven items (6.19% of the total variance). Pulled-in triangulation explains aspects of cross-generational coalition, depicting an alliance between the adolescent and one parent against the other. It captures a power struggle between parents, highlighting a type of triangulation involving parental conflict. This refined four-factor solution provides a clearer and more in-depth understanding of adolescent triangulation, addressing various dimensions within parental relationships and their impact on adolescents. The factor loading of each item on all four factors is shown in the corresponding table. Note. The scale was originally developed in the Urdu language; an unstandardized translation is provided here for the purpose of understanding.
---
First-Order CFA
The first-order Confirmatory Factor Analysis (CFA) of the Adolescent Triangulation Scale involved testing the predefined factor structure through statistical methods. The analysis utilized the 27-item pool to test the four-subscale ATS measurement model, and all items were allowed to load on their specified factor as suggested by the results of the EFA.
Note. CFI = Comparative Fit Index, GFI = Goodness-of-Fit Index, TLI = Tucker-Lewis Index, RMSEA = Root Mean Square Error of Approximation.
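The fit indices reported below can be produced by any SEM package. The following Python sketch uses semopy with hypothetical item names and model syntax; it is meant only to illustrate how such a four-factor measurement model might be specified and evaluated:

```python
import pandas as pd
from semopy import Model, calc_stats

data = pd.read_csv("responses.csv")   # placeholder: respondents x 27 retained items

# Hypothetical four-factor measurement model in lavaan-style syntax.
model_desc = """
PushedOut =~ i1 + i2 + i3 + i4 + i5 + i6 + i7 + i8
Mediator  =~ i9 + i10 + i11 + i12 + i13 + i14 + i15
Balanced  =~ i16 + i17 + i18 + i19 + i20
PulledIn  =~ i21 + i22 + i23 + i24 + i25 + i26 + i27
"""

cfa = Model(model_desc)
cfa.fit(data)

print(calc_stats(cfa)[["chi2", "DoF", "CFI", "TLI", "RMSEA"]])  # global fit indices
print(cfa.inspect(std_est=True))   # standardized loadings, used to screen weak items
```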
Table 2 shows that the chi-square values for the ATS were significant for the initial model 1 as well as the modified model 2. However, Bentler (2007) suggested that with a large sample size the chi-square test gives an inaccurate probability. Therefore, the decision on model fit was based on goodness-of-fit indices other than chi-square. Results reveal that model 1, i.e., the initial test of the ATS, shows a poor fit (χ² = 1248.19, df = 318, CFI = .92, RMSEA = .07, SRMR = .04). In order to improve the model, all items were inspected for standardized regression weights. As Hulland (1999) and Henseler et al. (2012) suggested, items whose standardized factor loadings fall between .40 and .70 should be considered for deletion. Therefore, item no. 34 (factor loading = .63) and item no. 13 (factor loading = .68) were deleted. Furthermore, item no. 26 was also deleted, as suggested by the modification index. Standardized factor loadings from this model are shown in Table 3.
However, mild revisions were also made with the help of error covariances. Based on the suggestions of the modification indices and content overlap, error covariances were added between the error terms of items belonging to the same general factor. This was done to obtain an excellent fit. The revised model showed considerably improved fit indices (χ² = 700, df = 243, CFI = .95, RMSEA = .06, SRMR = .03).
---
Second-Order CFA of the Adolescent Triangulation Scale
Second-order confirmatory factor analysis was used to interpret the ATS as multi-level and multidimensional by combining its four dimensions, namely pushed-out, pulled-in, mediator, and balanced triangulation, under a common higher-order factor, namely adolescent triangulation into inter-parental conflicts. Table 3 shows the chi-square, degrees of freedom, and model fit indices for the ATS second-order CFA.
---
Indicator Reliability
Indicator reliability was assessed through the standardized regression weights and squared multiple correlations of all items of the Adolescent Triangulation Scale. Table 4 shows the factor loading and R² for all 24 items retained after the CFA model fit. Results in Table 4 show that the factor loadings (λ) are well above the cutoff score of .70 and are significant at the 5% level of significance. The results indicate that each item's reliability was high, which supports the placement of each item on its designated latent construct. The R² values for the ATS items range from moderate to high, i.e., .61 to .85.
---
Internal Consistency, Convergent and Discriminant Validity of ATS
In order to assess the internal consistency, convergent validity, and discriminant validity of the newly developed Adolescent Triangulation Scale, Cronbach's alpha, composite reliability (CR), average variance extracted (AVE), maximum shared variance (MSV), MaxR(H), and HTMT were computed and reported in Table 5. Cronbach's alpha and composite reliability are commonly used to assess an instrument's internal consistency. Results in Table 5 show that the value of coefficient alpha ranged between .90 and .92, whereas the values of CR ranged between .92 and .94. The values of both parameters are well above the suggested cutoff values. Therefore, all four subscales are considered to have good internal consistency. MaxR(H) values were also observed to be greater than the CR values and hence provide evidence for construct validity. Average variance extracted was used to report the convergent validity of the ATS. Results show that the value of AVE for all four subscales is well above the suggested cutoff value, i.e., AVE > .50, ranging between .65 and .76. The value of CR is also well above the suggested cutoff point, i.e., CR > .60.
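For illustration only (with made-up loadings rather than the reported values), composite reliability and AVE can be computed directly from standardized loadings, and Cronbach's alpha from raw item scores:

```python
import numpy as np
import pandas as pd

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    loadings = np.asarray(loadings)
    errors = 1 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    loadings = np.asarray(loadings)
    return (loadings ** 2).mean()

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Hypothetical standardized loadings for one subscale (not the reported values).
pushed_out_loadings = [0.78, 0.81, 0.84, 0.80, 0.76, 0.79, 0.83, 0.77]
print(f"CR  = {composite_reliability(pushed_out_loadings):.2f}")
print(f"AVE = {average_variance_extracted(pushed_out_loadings):.2f}")
```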
Furthermore, discriminant validity was evaluated using the Fornell and Larcker (1981) criterion as well as the cross-loadings of indicators. Results show that all items have factor loadings greater than .70 on their respective factor. The cross-loading of all items on other factors is less than .40 and hence fulfils this criterion for inclusion in the final scale. In Table 5, the values in parentheses present the HTMT ratio of correlation between pairs of constructs; the reported values include .35, .34, and .31 (mediator and balanced). As all of the HTMT values are less than .85, the constructs are distinct, and discriminant validity may be stated to have been demonstrated.
Discriminant validity is also supported when the AVE values are larger than the corresponding maximum shared variance (MSV) (Hair et al., 2014). Results showed that the AVE values for all constructs are greater than their respective MSV and hence provide further evidence for discriminant validity. The Fornell-Larcker criterion of discriminant validity was also satisfied, as the correlations among all latent constructs are smaller than the square root of each construct's AVE. Furthermore, results show that the CR for all ATS subscales is above .70 and the AVE values are between .64 and .73. Overall, discriminant validity can be accepted for this measurement model.
---
Discussion
The research investigates adolescent triangulation in inter-parental conflicts, a concept often overlooked in family systems theories. Triangulation involving a third person in a relationship has received theoretical attention, but quantitative assessments are scarce. Notably, Bresin et al. (2017) and others have explored triangulation globally, yet Pakistan, with its distinct cultural norms, remains largely unexplored. The study seeks to bridge this gap by creating a reliable instrument tailored to the Pakistani context. Anticipating cultural differences, the research acknowledges that Pakistani adolescents may experience triangulation differently than their counterparts in more individualistic societies. This study addresses the dearth of measurement tools, aiming to enhance understanding in a cultural context where taboos and collectivistic norms shape interpersonal dynamics. The research underscores the need for culturally sensitive instruments in exploring adolescent triangulation.
In order to attain the above-mentioned objectives, the study was conducted in three phases, as suggested by Boateng et al. (2018). It started with item development by gathering detailed information about "triangulation" (Buehler & Welsh, 2009; Buehler et al., 2009; Amato & Afifi, 2006; Franck & Buehler, 2007), the construct to be operationalized in the present study. As the first step of scale development, pertinent literature on the concept of triangulation was thoroughly reviewed. This review helped the researcher develop the focus group guidelines for exploring the triangulation phenomenon as experienced by adolescents. By taking into account the main viewpoints extracted from previous literature, the researcher was able to develop clear, simple, short, open-ended questions about adolescent triangulation. Four focus group discussions were conducted with adolescents and parents in this phase of the study. After conducting the FGDs, the researcher was able to screen salient information about the fundamental aspects of adolescent triangulation. In the next step, the obtained data were transcribed using the simple transcription method of Kuckartz et al. (2014).
After transcribing the data, content analysis, following Kohlbacher's (2005) guidance, was applied, providing a comprehensive insight into adolescent triangulation. The results revealed that a majority of adolescents experienced involvement in inter-parental conflicts. Some positioned themselves as mediators, acting as an anchor to maintain parental connections, while others felt compelled to take sides under parental pressure. Interestingly, some adolescents perceived themselves as the focal point, receiving undivided attention from parents who seemed to forget their conflicts. Additionally, opinions varied, with some parents and adolescents suggesting that parental issues could be resolved without involving children. Based on these findings, a 40-item pool was generated to measure triangulation, expressed in clear Urdu. The items underwent rigorous review by eleven experts from the psychology departments at Rawalpindi Women's University and International Islamic University, Islamabad, ensuring the validity and reliability of the developed instrument. Lawshe's method (1975) was employed to assess content validity. The eleven experts evaluated items for clarity, conciseness, reading comprehension, face validity, and content validity. Their recommendations led to refining the initial 40 items, retaining 34 with excellent content validity and substantial face validity.
The second phase involved the scale development. Before the initial tryout, cognitive interviews were conducted with five adolescents to identify whether any item was confusing, problematic, or difficult to answer. All 34 items were found to be straightforward and comprehensible. After finalizing the item pool, the scale was administered to a purposive convenience sample of 494 adolescents. Data collected from this sample were subjected to descriptive statistics and factor analysis for the assessment of their psychometric properties and factorial structure. To assess the factorability of the correlation matrix of the scale, several well-established criteria were utilized, including Kaiser's criterion, principal-axis factoring (PAF) analysis, and Cattell's scree test. The variables loading on different factors measured distinct constructs. The rotated factor pattern defined a simple structure with strong loadings on one factor and modest loadings on the other factors. Because of low factor loadings or cross-loadings, 7 of the 34 Adolescent Triangulation Scale items were eliminated based on the EFA.
The 27 retained items demonstrated communalities exceeding .30, forming a cohesive four-factor solution reflecting pushed-out, pulled-in, mediator, and balanced triangulation. These findings underscored the Adolescent Triangulation Scale's (ATS) validity and reliability. The instrument supported a four-factor structure aligning with established triangular typologies. The final ATS, comprising at least four items per subscale, ensured a balanced representation of the sub-dimensions. Additionally, the alpha coefficients, exceeding .80 for the ATS and its subscales, signaled satisfactory internal consistency. This robust validation process solidified the ATS as a dependable tool for assessing adolescent triangulation in inter-parental conflicts. The Adolescent Triangulation Scale (ATS) has 27 items and comprises four subscales, i.e., pushed-out, pulled-in, mediator, and balanced. Total triangulation scores were obtained by reverse scoring the items of balanced triangulation, i.e., items no. 1-5.
To confirm the factorial structure of the scale developed through EFA, first- and second-order Confirmatory Factor Analysis (CFA) was conducted on a new sample of 493 participants. The initial first-order CFA, testing the four-factor solution suggested by the EFA, showed a poor fit. To enhance the model fit, items were scrutinized for standardized regression weights. Following the suggestions of Hulland (1999) and Henseler et al. (2012), items with standardized factor loadings between .40 and .70 were considered for deletion. Consequently, items 34, 13, and 26 were eliminated based on these criteria and on recommendations from the modification indices and the committee. Mild revisions were made using error covariances, guided by modification indices and content overlap. Error covariances were added within the same general factor to achieve an excellent fit. The revised model demonstrated significantly improved fit indices, supporting a robust four-factor structure according to the first-order factor analysis. Among the deleted items, an unstandardized English translation of Item 26 reads: "When one of my parents is not present, the other uses bad words about him/her." The second-order CFA was then performed with the remaining 24 items of the ATS to determine the overall construct of triangulation. The balanced subscale has a negative association with total scores, whereas pushed-out, pulled-in, and mediator triangulation have positive associations with the ATS total. Furthermore, the internal consistency of the ATS total, as well as all the subscales, was within the satisfactory range. Factor loadings and squared factor loadings were above the minimum cutoff point, indicating indicator reliability. Composite and Cronbach's reliability were also above the minimum acceptable range, indicating excellent internal consistency of the newly developed scale. Moreover, the average variance extracted (AVE), the Fornell and Larcker criterion, the heterotrait-monotrait (HTMT) ratio of correlations, and the maximum shared variance suggested good convergent and discriminant validity of the ATS.
---
Limitations and Suggestions
The study on scale development for measuring adolescent triangulation into inter-parental conflicts exhibits a few limitations. Firstly, the research was conducted in a specific cultural context (Pakistan), limiting the generalizability of the findings to diverse cultural settings. Additionally, the reliance on self-report measures introduces the potential for response bias. Future studies could benefit from incorporating more diverse samples and employing a multi-method approach to enhance the robustness of the developed scale. Longitudinal designs could provide a more nuanced understanding of the dynamics of adolescent triangulation over time. Furthermore, exploring the scale's applicability in various cultural contexts would enhance its cross-cultural validity. Addressing these limitations would contribute to the refinement and broader utility of the developed scale. | 30,230 | 1,455 |
c77ebad39105771a5da2fdf6ea7836898399870c | The Food Insecurity Issues in Gastronomy Tourism among Local and International Tourists in Malaysia | 2,024 | [
"JournalArticle"
] | The objectives of this study are to investigate the food security issues arising in gastronomic tourism, to verify the food insecurity experiences encountered by tourists, and to determine the tourists' dining satisfaction from the gastronomic tourism experiences in Malaysia. A quantitative approach was selected for this study. These issues were concluded from the data collection via questionnaire forms disseminated online through multiple social media platforms consisting of 250 participants of both local and international tourists visiting Malaysia. The Independent T-test and Mann-Whitney test were used as the main statistical test to establish if any tourist groups had food security-related issues during their visit. The results showed that local tourists are more likely to be affected by food security issues, food insecurity, and dining experiences. Overall, this study discovered that both local and international tourists have contrasting experiences in gastronomy tourism in Malaysia. | INTRODUCTION
Food security, as articulated by the World Health Organization, exists "when all people at all times have access to sufficient, safe, nutritious food to maintain a healthy and active life". Nevertheless, the emergence of COVID-19, war, and significant climate change has adversely affected global food production and distribution, ultimately leading to a global food crisis. In the current landscape, emerging food security issues are causing the tourism industry to collapse. These issues include escalating food supply costs (Jalaluddin et al. 2022) and insufficient food supplies caused by over-reliance on imported goods amid insufficient domestic production (Ahmed & Siwar 2013).
This study aims to address the pressing issue of food security within gastronomy tourism, particularly concerning the scarcity of food supplies in Malaysia. This scarcity has resulted in price surges and limited food accessibility, affecting both local and international tourists. As Hashim et al. (2019) outlined, the annual escalation of food expenses further exacerbates existing food security issues. Additionally, the growing number of tourists intensifies the severity of food security issues, and governmental management is required to meet demand adequately (Hashim et al. 2019). This study will help in recognizing the difficulties faced by both local and international tourists regarding food security-related issues, thereby revealing the current state of food insecurity in Malaysian gastronomy tourism.
The objectives of this study include investigating the food security issues in domestic tourism among local and international tourists, verifying the food insecurity experiences encountered by local and international tourists, and determining the tourists' dining satisfaction from the gastronomy tourism experiences in Malaysia.
In essence, this study strives to better comprehend the struggles local and international tourists encounter when it comes to food security and accessibility in Malaysia due to a variety of factors, including food supply scarcity due to livestock shortages and rising food prices driven by demand and supply imbalances, as well as the satisfaction (Gani et al. 2017) and contentment of visitors concerning food consumption and accessibility while visiting.
---
METHODS
---
Design, location, and time
A quantitative approach was adopted as the research design for this study since it aligned seamlessly with the study's objectives. Additionally, cross-sectional and non-experimental methods were utilised to determine emergent food security issues within the population. The study was conducted across the entirety of Malaysia, involving both local and international tourists. Data collection was expedited through the distribution of a Google Form link via various social media platforms such as Facebook, Twitter, TikTok and YouTube. Informed consent was obtained from respondents before they proceeded to fill in the online Google Form.
---
Sampling
Quota sampling was selected to ensure that the respondents accurately represented local and international tourist groups by meeting the inclusion and exclusion criteria. The inclusion criteria for this study consisted of Malaysian citizens as local tourist respondents, foreign visitors to Malaysia as international tourist respondents, and the participants have consumed local cuisine during their Malaysian visit. As for the exclusion criteria, participants were excluded if they were Malaysians residing in other countries, foreigners residing within Malaysia, or participants who did not purchase or consume local cuisine. A sample size of 250 people, inclusive of both local and international tourists, was designated for the study, and the determination of sample size was facilitated through G*Power software for a two-tailed independent t-test, which indicated a minimum sample size of 210 individuals. To anticipate potential missing data during analysis, an additional 40 participants were included. The respondents' nationality was identified before approaching them to facilitate the grouping process.
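The G*Power calculation can be approximated in Python with statsmodels. The effect size and power below are assumptions made purely for illustration, since the exact input parameters are not reported:

```python
from statsmodels.stats.power import TTestIndPower

# Assumed inputs (not reported in the paper): medium effect size,
# alpha = .05, power = .80, equal group sizes, two-sided test.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.40, alpha=0.05, power=0.80,
    ratio=1.0, alternative="two-sided",
)
print(f"Required sample size: {n_per_group:.0f} per group "
      f"({2 * n_per_group:.0f} in total)")
```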
---
Data collection
Data collection centered on a questionnaire as the primary source of data from the sample, developed by adapting questions from previous research. The questionnaire, consisting of 39 questions divided into 4 sections, utilized a Likert scale to gauge respondents' opinions regarding the food security situation, where 1 represented the lowest and 5 the highest level of agreement. Before distribution, the internal consistency of the questionnaire was assessed using Cronbach's alpha to ensure its appropriateness for distribution to respondents. The Cronbach's alpha value obtained was 0.954, indicating a very high level of internal consistency for the scale used. This assessment was based on a total of 33 items.
---
Data analysis
The data were analysed using IBM SPSS 27. Categorical data were presented as frequencies and percentages, whereas numerical data underwent descriptive analysis and were presented as mean and standard deviation or median and interquartile range, depending on the normality of the data distribution. The independent t-test and the chi-square or Fisher's exact test were applied to achieve the objectives of this study. Variables that were not normally distributed were analysed using the Mann-Whitney test. Statistical significance for this study was set at p<0.05.
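A minimal Python sketch of the same testing logic is given below; the file and column names are hypothetical, and the analysis was actually run in SPSS:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("tourist_survey.csv")          # placeholder file and columns
local = df.loc[df["nationality"] == "local", "food_security_score"]
intl = df.loc[df["nationality"] == "international", "food_security_score"]

# Check normality per group before choosing the test.
p_local = stats.shapiro(local).pvalue
p_intl = stats.shapiro(intl).pvalue

if p_local > 0.05 and p_intl > 0.05:
    t, p = stats.ttest_ind(local, intl)          # independent t-test
    print(f"t = {t:.2f}, p = {p:.3f}")
else:
    u, p = stats.mannwhitneyu(local, intl, alternative="two-sided")
    print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```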
---
RESULTS AND DISCUSSION
Based on the data gathered from the questionnaire, each section underwent individual analysis encompassing descriptive analysis, normality test, and inferential analysis, which were the independent t-test and the Mann-Whitney test.
Table 1 presents insights into the demographic backgrounds of the respondents consisting of their gender, age, education level, nationality, occupation, and average annual income.
Table 2 illustrates the mean score for attributes about food security issues, with 'the food available is enough for the tourists to order during peak seasons' (4.16) receiving the highest rating, while the lowest rated is 'the prices are reasonable' (3.44).
The mean difference in food security issues between local and international tourists proved to be statistically significant (p=0.022; 95% CI: 0.03-0.41). The mean score attributed to international tourists (3.96) exceeded that of local tourists (3.74). This observation shows that the attributes associated with food security issues had a more noticeable impact on local tourists.
The initial hypothesis noted that the food security issues in Malaysia were substantial, and the result of this study confirmed that hypothesis across various aspects such as the food prices, hygiene conditions, and the adequacy of the nutrient content of the food prepared. However, the study outcomes revealed that local tourists were facing challenges to a greater extent than international tourists as they were the ones taking the toll from the factors that contributed to the escalation of food security issues. In essence, the food security issues that are currently affecting gastronomy tourism in Malaysia have unfortunately become a discouragement to the local tourists from enjoying a vacation within their homeland.
The tourists were prompted to express their level of agreement concerning their encounters with food insecurity experiences during their visit to Malaysia. As depicted in Table 3, the mean scores for the attributes of food insecurity experiences are revealed. Notably, the attribute with the highest mean score was 'there are varieties of local specialities available' (4.16), signifying positive feedback among tourists. In contrast, the lowest valued attribute was 'all items from the menu are available when requested' (3.44).
The outcomes of this study affirm that both local and international tourists encountered food insecurity during their stay. However, it is noteworthy that the local tourist group exhibited a lower mean score, indicating that they were vulnerable to these experiences compared to the international tourists. Several aspects fell below expectations in contributing to the gastronomic experience of the tourists, which led to the food insecurity experiences. This implies that the tourists within Malaysia were having a less pleasurable experience of the gastronomic scene in Malaysia as their needs in terms of food were not being fulfilled during their holiday, hence confirming the hypothesis.
The last section of the questionnaire asked respondents about their dining satisfaction while purchasing and consuming food in Malaysia. Table 4 shows the mean scores for the attributes of dining satisfaction, with the highest rated being 'As a whole, Malaysia is a good food tourism destination' (4.38) and the lowest rated being 'The food fulfils the dining experience in terms of hygiene and sanitation' (3.58). Due to non-normal data distribution, the Mann-Whitney test was used to compare dining satisfaction between local and international tourists. The comparison of mean ranks and sums of ranks between the two groups indicates that the international tourist group has a larger mean rank (156.21) than the local tourist group (119.30). The Mann-Whitney U test was statistically significant (p=0.003), indicating higher dining satisfaction among international tourists than local tourists.
The results revealed a Mann-Whitney U value of 3,078.00, with a test statistic Z of -3.020 and an asymptotic significance (2-tailed) of 0.003. This result was statistically significant, confirming a notable difference in dining satisfaction between local and international tourists, with international tourists reporting higher satisfaction than local tourists.
As conveyed by the respondents, the evaluations of dining satisfaction illuminate a distinct contrast between the experiences of international and local tourists. It was previously speculated that both groups were satisfied with their dining experience, but the results showed a significant difference. Some attributes listed under this variable, such as hygiene and comfort, were deemed less agreeable to the local tourists, hence the lower mean score.
The critical factor contributing to this difference seems to be the specific attributes associated with dining satisfaction, particularly hygiene and comfort. These elements were presumably less satisfactory to local tourists, reflected in their lower mean scores. This suggests that local tourists have different expectations or standards regarding these aspects of dining compared to international tourists.
This outcome indicates the importance of understanding and catering to different tourist groups' varied preferences and expectations.
These findings could be instrumental in tailoring services and improving overall customer satisfaction (Rimmington & Yuksel 1998) in the hospitality and tourism industry (Hall & Mitchell 2001). It emphasises the need for a nuanced approach to evaluate and enhance the dining experience, considering the diverse perspectives of both international and local visitors.
---
CONCLUSION
In conclusion, the study outcomes emphasise an imbalance of experiences between local and international tourists in gastronomy tourism in Malaysia (Leong et al. 2017). The local tourist group sustained a major disadvantage in gastronomic tourism compared to international tourists, as evidenced by the results. This imbalance can be disheartening, indicating that local tourists cannot fully appreciate and enjoy their vacation within their homeland. In contrast, international tourists exhibit higher contentment and satisfaction with Malaysia's gastronomic offerings (Mora et al. 2021), despite the low number of respondents from various countries.
In light of these findings, the authorities in the tourism sector must address the root causes of these issues and brainstorm mitigative actions to correct this situation, thus providing an enriching gastronomic experience for all tourists.
---
DECLARATION OF CONFLICT OF INTERESTS
The authors have no conflict of interest. | 12,489 | 1,003 |
de614a16f0973ebb2ac4d6d37803aafb91a0b2c7 | Declining realisation of reproductive intentions with age. | 2,019 | [
"JournalArticle",
"Review"
] | STUDY QUESTION: What is the likelihood of having a child within 4 years for men and women with strong short-term reproductive intentions, and how is it affected by age? SUMMARY ANSWER: For women, the likelihood of realising reproductive intentions decreased steeply from age 35: the effect of age was weak and not significant for men. WHAT IS KNOWN ALREADY: Men and women are postponing childbearing until later ages. For women, this trend is associated with a higher risk that childbearing plans will not be realised due to increased levels of infertility and pregnancy complications.This study analyses two waves of the nationally representative Household, Income and Labour Dynamics in Australia (HILDA) survey. The analytical sample interviewed in 2011 included 447 men aged 18-45 and 528 women aged 18-41. These respondents expressed a strong intention to have a child in the next 3 years. We followed them up in 2015 to track whether their reproductive intention was achieved or revised. PARTICIPANTS/MATERIALS, SETTINGS, METHODS: Multinomial logistic regression is used to account for the three possible outcomes: (i) having a child, (ii) not having a child but still intending to have one in the future and (iii) not having a child and no longer intending to have one. We analyse how age, parity, partnership status, education, perceived ability to conceive, self-rated health, BMI and smoking status are related to realising or changing reproductive intentions.Almost two-thirds of men and women realised their strong short-term fertility plans within 4 years. There was a steep age-related decline in realising reproductive intentions for women in their mid-and late-30s, whereas men maintained a relatively high probability of having the child they intended until age 45. Women aged 38-41 who planned to have a child were the most likely to change their plan within 4 years. The probability of realising reproductive intention was highest for married and highly educated men and women and for those with one child.Our study cannot separate biological, social and cultural reasons for not realising reproductive intentions. Men and women adjust their intentions in response to their actual circumstances, but also in line with their perceived ability to have a child or under the influence of broader social norms on reproductive age.Our results give a new perspective on the ability of men and women to realise their reproductive plans in the context of childbearing postponement. They confirm the inequality in the individual consequences of delayed reproduction between men and women. They inform medical practitioners and counsellors about the complex biological, social and normative barriers to reproduction among women at higher childbearing ages. | Introduction
In many countries, childbearing is increasingly being postponed to later ages (Mills et al., 2011;Schmidt et al., 2012). Between 1975 and 2016, the median age of women who gave birth in Australia increased by more than 5 years from 25.8 to 31.2 years (ABS, 2017); similar increases took place in other highly developed countries (Sobotka, 2017). These shifts are also reflected in recent surveys of reproductive intentions where many women in their late 30s and early 40s report plans to have a(nother) child (Sobotka and Beaujouan, 2018). Childbearing at higher reproductive ages is linked to socioeconomic advantages for mothers and their children, including higher subjective well-being among mothers (Myrskylä and Margolis, 2014). However, it also comes with risks. Infertility increases rapidly for women in their mid-30s and older (Steiner and Jukic, 2016;Liu and Case, 2017). At age 40, one in six women are no longer able to conceive, increasing to more than half by age 45 (Leridon, 2008). Even when pregnancy is achieved, higher maternal age is a risk factor associated with perinatal mortality, low birth weight, pre-term births, maternal death, gestational diabetes, pregnancy-induced hypertension, severe preeclampsia and placenta previa (Balasch and Gratacós, 2012;Delbaere et al., 2007;Goisis et al., 2018;Huang et al., 2008;Bewley et al. 2005;Jacobsson et al. 2004;Schimmel et al., 2015).
When childbearing is delayed, women and men planning to have children are at increased risk of not realising their plans due to infertility or pregnancy loss (McQuillan et al., 2003;Greil et al., 2011;Schmidt et al., 2012;Habbema et al., 2015). Women often lack awareness of the potential difficulties of conceiving at later ages (Bretherick et al., 2010;Mac Dougall et al., 2013;García et al., 2018), and men display even less knowledge than women regarding fertility, age limits of reproduction and assisted reproductive technologies (Daniluk and Koert 2013). In part due to this lack of knowledge, many women and couples postpone childbearing until ages when it is more difficult to conceive and carry a pregnancy to term (Cooke et al., 2012;Birch Petersen et al., 2015).
Men are not subject to the same biological constraints as women, as their fertility starts declining later and at a slower rate (Fisch and Braun, 2005;de La Rochebrochard et al., 2006;Sartorius and Nieschlag, 2010;Kovac et al. 2013;Eisenberg and Meldrum, 2017). In addition, men tend to partner with women younger than themselves (Ortega, 2014). These biological and social differences between men and women imply that they also have different chances of realising their fertility plans later in life.
Research in European countries has identified a negative effect of age on intentions to have children and their realisation (Berrington, 2004; Roberts et al., 2011; Kapitány and Spéder, 2012; Spéder and Kapitány, 2013; Dommermuth et al., 2015; Pailhé and Régnier-Loilier, 2017). People in their late 30s and early 40s are also more likely to abandon previous plans to have children, and this is the case for both women and men (Spéder and Kapitány, 2009). Fertility intentions and their realisation also vary by partnership status (Gray et al., 2013; Hayford, 2009; Iacovou and Tavares, 2011; Liefbroer, 2009). Partnered women and men usually display higher and more certain childbearing intentions, and they are also more likely to realise them (Spéder and Kapitány, 2013).
Fertility intentions also vary by parity (achieved number of children), and women who already have two children are much less likely to desire another child than those who are childless or have one child. However, women who already have children are more likely to achieve their fertility plans than childless women (Harknett and Hartnett, 2014;Dommermuth et al., 2015). Fertility intentions are also affected by socioeconomic status. Highly educated women are more likely to delay having children, and they are less likely to abandon childbearing plans compared with less educated women (Kapitány and Spéder, 2012), even though the end result is that they are more likely to stay permanently childless (Kreyenfeld and Konietzka, 2017;Neels et al., 2017).
Our study examines whether men and women who reach later reproductive ages are able to fulfil their short-term reproductive goals of having children in the near future and, if they have not achieved their goals, whether they abandon plans to have a child. We go beyond the existing research by focusing on (i) the age pattern of fertility realisation among those with strong short-term initial fertility intention (within the next 3 years), (ii) the age pattern of changes in reproductive intention and (iii) the differences between men and women. Our outcome variable has three mutually exclusive categories: realisation of intention by having a child, no longer strongly intending to have a child and still intending to have a child. Our multinomial regression models account for number of children, partnership status, education, perceived reproductive impairment, self-rated health, BMI and smoking status.
---
Materials and Methods
---
Data
We used a large representative longitudinal survey, the Household, Income and Labour Dynamics in Australia (HILDA) survey, conducted since 2001. In 2005, 2008, 2011 and 2015, it included a subset of questions on desires and preferences for children as part of its incorporation in the international Generations & Gender Survey Programme (https://www.ggp-i.org/). We used data from the two most recent waves, 2011 and 2015. We identified respondents with short-term reproductive intentions in the 2011 wave and tracked whether their intentions were realised, resulting in the birth of a child, or whether they were abandoned or postponed by the 2015 wave. Attrition in this survey was particularly low (Summerfield et al., 2016): survey attrition specific to the age range under study was 16% for women and 19% for men (Table I). Longitudinal paired weights were used to partly compensate for any bias due to attrition. They are based on an initial cross-sectional weight for 2011 and then adjusted for attrition between the 2011 and 2015 samples (Watson 2012).
---
Study population
Of the original survey sample in 2011, we retained 447 men and 528 women with a strong short-term intention to have a child (Table I). Specifically, we selected men aged 18-45 and women aged 18-41 in 2011, who were present at both waves (2011 and 2015), who (or whose partners) were not pregnant and had not undergone a vasectomy or tubal ligation and who expressed a strong intention to have a child in the next 3 years as defined below. We also excluded men and women who did not answer the self-completed section, which included health and epidemiological characteristics.
---
Identifying individuals with a strong intention of having a child in the next 3 years in the 2011 wave
Uncertainty is an inherent part of reproductive intentions (Morgan, 1981;Ní Bhrolcháin and Beaujouan, 2019). Based on previous studies of fertility intentions and realisation (Toulemon and Testa, 2005; Régnier-Loilier and Vignoli 2011), we focused only on women and men who in 2011 expressed high certainty in their positive intention to have a child. We identified these individuals based on cumulative responses to three questions. First, we selected respondents who stated that they would like to have more children. Then we selected those with a high degree of certainty, as indicated by a score of 7 or higher on the 0-10 scale assessing respondents' perceived likelihood of having a child in the future. Finally, respondents were asked in which year they planned to have a child. We included respondents who stated that they planned on having a child in 2012, 2013 or 2014 and those who did not provide a specific year but said 'within the next 3 years'.
---
Relevant characteristics of the study population in 2011
The percentage distribution of men's and women's characteristics in 2011 is shown in Table II. Age is a categorical variable with the following age groups: 18-25, 26-28, 29-31, 32-34, 35-37 and 38-45 (38-41 for women). Education is classified as low, medium or high, based on the ISCED categorisation of educational attainment. Respondents with a 'low' level of education did not complete high school, those with a 'medium' education completed high school and/or had a certificate or diploma and those with a 'high' level of education completed a university degree. Perceived ability to conceive identifies those who were aware of any physical or health difficulties that would make it difficult for them or their partner to conceive. We included self-rated health, BMI and smoking status due to their association with ability to conceive. Self-rated health distinguishes four groups: those who described their health as 'Excellent', 'Very good', 'Good' and 'Fair/poor'. The BMI variable has four categories: underweight, normal weight, overweight and obese. Underweight was rare (N = 13) and was combined with 'normal' weight. Smoking was measured as current daily smoker or not.
---
Outcome variable
After identifying respondents with a strong short-term intention to have a child in the 2011 wave, we followed them up in the 2015 wave to see whether their plans were realised ('Outcome variable' in Table II). We distinguished three outcomes: (i) respondent had a child by 2015, or they (or their partner) were pregnant in 2015 ('Realised intention by having a child'), (ii) respondent did not have a child and changed intention ('No longer intended to have a child') and (iii) respondent did not have a child but retained a strong intention to have one ('Still intended to have a child'). The fact that the two surveys were 4 years apart whereas fertility intentions were expressed for a 3-year horizon allows for time needed to achieve a pregnancy (Van Eekelen et al., 2017).
---
Statistical analysis
Multinomial logistic regression was used to determine whether age and the other relevant characteristics were associated with the outcome variable; this model is appropriate when the outcome variable has more than two levels (Simonoff 2003, p. 429). The covariates include the characteristics of respondents in 2011 described above: age, parity, relationship status, level of education, perceived ability to conceive, BMI, self-rated health and smoking status. Models were run separately for men and women. We were interested in giving an aggregate-level account of the effects of age and other covariates on the realisation of intentions, rather than analysing the effects for individuals of a change in the independent variable from one specified value to another (Mood 2010). We thus opted to present in the text predicted probabilities and confidence intervals for each variable, holding all other variables at their mean. The original coefficients of the analytical models ('Relative Risk Ratios' in Stata) are available in Supplementary Table SI. We tested the predictive power of each covariate for pairs of outcomes using chi-squared tests as described in the note of that table. We also tested the overall significance of the introduction of each of the covariates in the models using global likelihood ratio (LR) chi-square tests (tests of the hypothesis that the coefficients are simultaneously equal to zero for all the categories of the covariate and between all the levels of the response). In a multinomial logistic regression, such tests indicate the predictive power of the covariates for all the outcomes together, rather than for pairs of them. The results of these global tests are available in Supplementary Tables SII and SIII. Note that in multinomial models, individual category coefficients can be substantively and statistically significant even if the variable is overall deemed non-significant (Long & Freese, 2006). All statistical analyses were performed in Stata 14.2 (StataCorp, 2015).
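The models were estimated in Stata; purely as an illustration of the same idea, a multinomial logit with predicted probabilities at covariate means could be sketched in Python as follows (the data file and variable names are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per respondent with the 2011 covariates
# and a 3-level outcome (0 = had child, 1 = no longer intends, 2 = still intends).
df = pd.read_csv("hilda_intenders.csv")

model = smf.mnlogit(
    "outcome ~ C(age_group) + C(parity) + C(partnership) + C(education) "
    "+ C(perceived_infertility) + C(self_rated_health) + C(bmi_cat) + C(smoker)",
    data=df,
).fit()

print(model.summary())   # exponentiated coefficients correspond to relative risk ratios

# Predicted probabilities of each outcome with all covariates held at their mean.
at_means = model.model.exog.mean(axis=0).reshape(1, -1)
print(model.predict(at_means, transform=False))   # one probability per outcome category
```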
---
Results
Overall, two-thirds of men (65%) and women (64%) had the child they had planned within the 4-year interval, and 12% of men and 13% of women changed their intention (Table II). Tables III and IV present the predicted probabilities and confidence intervals of the three outcomes, obtained from the multinomial logistic regressions for men and women.
The sociodemographic variables had significant predictive power (P < 0.05 on LR tests, except for male level of education, where P = 0.052), whether introduced into an empty model or into the model with all the other variables already included (Supplementary Tables SII and SIII). For both men and women, the predicted probability of realising their strong fertility intention declined with age (Tables III and IV). However, the decline was much steeper for women. For men, the estimated probability of having a child was highest at age 18-25 (73%), declining to 57% at age 38-45. For women, a steep decrease in the probability of having a child occurred from age 35 onwards, with estimated probabilities of realising intentions falling from 70% at age 29-31 to 61% at age 32-34, 48% at age 35-37 and 23% at age 38-41. There was a corresponding increase in changing plans to have children: in the oldest age group, by 2015, 42% were predicted to no longer strongly intend to have a child compared with just 5% at age 29-31. Surprisingly, more than one-third (35%) of women aged 38-41 in 2011 still intended to have a child when asked again in 2015, when they were aged 42-46. Men also more frequently changed their reproductive plans at later ages, but to a lesser extent than women.
Relationship status in 2011 strongly and significantly influenced the capacity to realise intentions within 4 years among both men and women. Married people were most likely to realise their intention (M: 77%, F: 74%), followed by people living in cohabiting relationships (64% for both sexes). Among single people (i.e. living without a partner), 40% of men and 45% of women realised their intention; this seemingly high share is partly explained by changing relationship status, as many had found a partner between 2011 and 2015. Respondents who already had one child were the most likely to realise their intention (M: 73%, F: 72%). In contrast, those who had two children were the most likely to abandon further childbearing goals. Education level was positively related to achieving childbearing intention, with highly educated men and women most likely to have had a child (M: 73%, F: 71%). Low-educated women had a significantly smaller predicted probability of realising their intentions than their more educated counterparts (51%). Finally, the epidemiological variables were related to the outcomes in the null model (model with no other covariate), except perceived ability to conceive in 2011 and self-rated health for men, but had no significant predictive power in the full model (Supplementary Tables SII and SIII).
Women aged over 38 experience a strong biological fertility decline which possibly dominates all other factors. This may bias the coefficients of the other variables in the model. Therefore, we conducted a separate sensitivity analysis by excluding women aged 38+. This exclusion did not change significantly the effects observed in the model (results available upon request).
---
Discussion
Our study brings attention to the role of reproductive age in realising short-term reproductive plans: the analysis reveals a clear-cut contrast between men and women, which persists after controlling for other confounding variables. A majority of men and women who strongly intended to have a child in 2011 had achieved their reproductive plan within 4 years. However, we also found a strong age-related decline in achieving reproductive plans for women starting in their mid-30s, and a corresponding increase in revising plans to have children. In contrast, men in their late 30s and early 40s still maintained a relatively high probability of having the child they intended.
The strong age-related decline in intention realisation among women is consistent with the findings on age-related increase in infertility, sterility and pregnancy complications. Results are also consistent with the perceived social age deadlines for childbearing, where age 40 is often seen as a boundary after which women should not have children (Billari et al., 2011): many women abandon or revise their fertility plans when they approach this normative age limit. The limited age-related decline in intention realisation for men is likely due to their slower pace of reproductive aging (Kidd et al. 2001;Sartorius and Nieschlag 2010), their higher perceived social age deadline for childbearing (Billari et al., 2011) and the age difference within couples. Men tend to partner with younger women (Bozon 1991), with larger age differences found for men who partner at older ages (Ní Bhrolcháin and Sigle-Rushton, 2005;Beaujouan, 2011). The impact of age difference in partnering patterns between men and women should be explored further in future research.
For both sexes, partnership status was an important determinant of realisation of their reproductive plans. Men and women who did not live with a partner in 2011 had a lower likelihood of realising their initial fertility plans. At the same time, they were more likely to continue to intend to have children and half of them had partnered by 2015.
In Australia, as in most other highly developed countries, there is a strong two-child preference (Kippen et al., 2007), confirmed in our analysis: women and men with one child are the most likely to have a strong short-term fertility intention and to realise it. Highly educated men and women were most likely to realise their strong short-term intention within 4 years. As they have children later in life, this finding also reflects their awareness that they cannot wait much longer to realise their plans (Kreyenfeld 2002).
The epidemiological variables had explanatory power before controlling for the other variables, while only age, parity, relationship status and level of education remained significant in the full model. In sum, epidemiological factors appear less important than sociodemographic factors to explain the realisation of strong short-term fertility intentions. Surprisingly, perceived ability to conceive was not significantly associated with realising intentions. The HILDA survey data do not allow us to get deeper insights on this result; the data provide neither sufficient information on the use of reproductive treatments nor respondents' assessment about the reasons for not realising their intention. This points to a broader limitation of our study: while our data confirm the strong effect of age on the ability of women to realise their reproductive plans, we cannot distinguish the contribution of biomedical factors (especially infertility, miscarriages and poor health) from the one of socioeconomic and cultural influences, including the cultural norms about appropriate ages for childbearing. Another broader limitation pertains to sample size. Our analytical sample (M: 447, F: 528) was sufficient to identify the role of age, sex and other factors analysed here, but at times resulted in wide confidence intervals and did not allow more detailed analysis of interactions between age and other intervening variables.
Our study sheds light on the gender-specific role of age and other factors in realising reproductive intentions. As more women and men postpone having children until their late 30s and early 40s, they need to be aware of biomedical and other constraints and limitations that may prevent them from realising their reproductive plans. Our study confirms that this might be especially relevant for women of older reproductive ages: many in this study were postponing their childbearing plans and intending to have a child after age 40. For women with strong reproductive intentions, this study highlights the importance of not postponing childbearing in order to improve the chances of realising their plans (Habbema et al., 2015). Future research should shed more light on the contribution of men's and women's age to realising reproductive plans among couples and, using more waves of the survey when available, also study longer-term successes, failures and changes in realising reproductive plans. Future surveys could also better capture the dynamics with which women and men facing reproductive difficulties either abandon their reproductive plans or seek treatment, and the extent to which this treatment helps them achieve their desired family size.
---
Reproductive intentions / fertility / reproductive aging / parental age / Australia / gender differences
---
Supplementary data
Supplementary data are available at Human Reproduction online.
---
Authors' roles
Éva Beaujouan initiated this research and made a substantial contribution to the design of the work and to the analysis and interpretation of data. She drafted the first version of the article and revised it critically. Anna Reimondos made a substantial contribution to the design of the work, to the acquisition of data and to the analysis and interpretation of data; she drafted the article and revised it critically. Edith Gray, Ann Evans and Tomáš Sobotka made substantial contributions to the design of the work and to the analysis and helped draft the article and revise it critically. All five authors approved the final version of the manuscript prior to publication.
---
Conflict of interest
None to declare. | 21,855 | 2,764 |
c777809de51886533ed4184ef06ab151ef25ff01 | Travma Bilgili Bakım Ölçeği: Türkçe Geçerlilik ve Güvenilirlik Çalışması | 2,023 | [
"JournalArticle",
"Review"
] | The aim of this study was to adapt the Trauma Informed Care Scale, developed to measure the level of knowledge, attitude and practice regarding trauma-informed care, to Turkish culture by conducting the necessary analyses. A total of 161 mental health professionals participated in this survey-model study. The data were collected through convenience sampling using a Demographic Information Form and the Trauma Informed Care Scale, via the online data collection platform surveey.com. It was found that most of the mental health professionals included in the study (70.2%) had never heard of the trauma-informed care model before, and 87% did not use this model in their practice. The exploratory factor analysis revealed a three-factor structure explaining 50.36% of the total variance, with all items falling into the subscales of the original scale. As a result of the analyses, 3 items belonging to the Attitude subscale were removed and the final 18-item version that can be used in Turkish culture was obtained. Correlation analyses showed that the total mean score was highly and positively correlated with all subscales. The Trauma Informed Care Scale is a valid and reliable measurement tool that can be used by mental health professionals working with trauma-victimized clients (physicians, nurses, psychologists, psychological counselors, social workers) and by researchers planning studies on trauma-informed and/or trauma-sensitive care. | Introduction
Discussions about what constitutes a psychically traumatic event have been going on for a long time. In the 19th century and the first half of the 20th century, use of the term "trauma" was largely limited to physical trauma. The idea that traumatic events other than physical harm can also cause problems emerged after the Franco-Prussian War of 1870 (Çolak et al. 2010). The Substance Abuse and Mental Health Services Administration (2014) defines trauma as an event or series of events that are emotionally disturbing or life-threatening for an individual, or the lasting adverse effects of these events on the individual's mental, physical, social, emotional or spiritual well-being.

While the Diagnostic and Statistical Manual of Mental Disorders-III (DSM-III) (APA 1980) described traumatic events as "beyond the usual human experience", in DSM-IV (APA 1994) the experience of helplessness, fear and horror and the threat of annihilation in the face of the event became the defining features of a traumatic event. In DSM-5 (APA 2013) the scope was expanded further and the role of the person's subjective experience was removed; the traumatic experience was medicalized and defined as a "standard" condition, much as an infectious disease is attributed to a single microorganism (Başterzi et al. 2019). According to DSM-5, trauma may occur when an event is directly experienced or witnessed, when it happens to a family member or close friend, or when it is encountered professionally, and it involves facing death or serious injury or being sexually assaulted (APA 2013). In DSM-5, the subjective reaction of the person is not taken into account; instead, the ways of encountering events are listed in order to clarify the definition of a traumatic event. The person may have experienced or witnessed the event themselves, or it may have happened to a close friend or relative. The expression "physical integrity of self and others", which appeared in previous editions of the DSM, was removed, and for the first time the expression "sexual assault" was included (Çolak et al. 2010).

The World Health Organization (WHO 1995) defines trauma through events such as accidents, natural disasters, fire, rape, harassment, blackmail, the sudden death of a loved one, life-threatening illness, war, fraud, seeing a corpse, seeing someone injured or killed, home invasion, being threatened, being victimized by terrorism, physical violence or attack, divorce and abandonment. This definition focuses on the events themselves rather than on their psycho-social effects on the person.

Terr (2003) first distinguished between two types of trauma: Type I and Type II. Type I, single-incident trauma, results from a single event such as a rape or witnessing a murder. Type II, complex or repetitive trauma, results from "repeated exposure to extreme external events". Survivors of Type II trauma generally have at least some memories of their experience. Trauma can result from extraordinary events such as violence and harassment, or from ordinary everyday events. Regardless of how it occurs, trauma is generally the most avoided, ignored, belittled, denied and untreated cause of human suffering (Levine and Kline 2014). While some traumas such as physical and sexual abuse, domestic violence, exposure to partner violence, rape, abuse and death are quite obvious, chronic experiences such as emotional neglect, an inattentive caregiver, a parent addicted to alcohol or drugs, or being threatened are subtler and more insidious.
Most clients may experience different types of trauma that cause toxic stress and trigger complex trauma reactions (Cloitre et al. 2009). The level of being affected by trauma varies according to the gender, age and psycho-social development of the individual. Existing vital risks such as substance abuse, disability and mental illness, as well as the individual's strengths and existing social support networks, also affect the level of being affected by trauma (Ogden et al. 2006).
Trauma-related disorders, previously classified in the anxiety disorders section, are classified under trauma- and stressor-related disorders in DSM-5. The related disorders according to the new classification are: reactive attachment disorder, acute stress disorder (ASD), post-traumatic stress disorder (PTSD), adjustment disorders (ADs) and dissociative disorders (DDs). Environmental risk factors, including the individual's developmental experience, thus become a major diagnostic consideration (Friedman et al. 2011, Koç 2018). In the International Classification of Diseases-11 (ICD-11), a new classification has been made under the heading of disorders specifically associated with stress: post-traumatic stress disorder, complex post-traumatic stress disorder, prolonged grief disorder, adjustment disorder, reactive attachment disorder and acute stress reaction (Maercker et al. 2013). Both DSM-5 and ICD-11 include post-traumatic stress disorder (PTSD) among trauma- and stressor-related disorders. An important group of clients at the center of trauma-informed care consists of people with post-traumatic stress disorder. Trauma-informed care argues that traditional standard treatment models can trigger trauma survivors and exacerbate their symptoms. Trauma-informed programs are designed to be more supportive and to avoid re-traumatization of people with post-traumatic stress disorder (SAMHSA 2014).
Trauma, in any case, does not affect everybody in the same way. Some people are not affected even after experiencing terrible events, while those who merely witness such events may be affected more strongly. The traumatic response is profoundly individualized and shaped by a wide range of factors. The trauma-informed approach of professionals determines the course of the long-term effects of the traumatic event (Wilson et al. 2013). The trauma-informed approach to care has evolved over the past 30 years from various streams of thought and innovation. Nowadays, it is practiced in a wide variety of settings, including mental health and substance abuse rehabilitation centers, child welfare systems, schools, and criminal justice institutions (Cohen et al. 2012). Although it is widespread, trauma-informed care is not a one-size-fits-all approach. Interventions should always be determined according to the individual situation of the client; gender and type of trauma are among the specific factors that determine the type of intervention (Kelly et al. 2014).
While there are similarities between trauma-informed care and trauma resolution therapy, the two are quite different. Trauma-focused interventions can be a precursor to targeted therapy for many clients. Trauma-informed care practices help clients with traumatic experiences discuss their painful experiences and reduce their anxiety levels, which in turn helps them regulate their emotions and behaviors (Cohen et al. 2012). Unlike classical theory and treatment methods, trauma-informed care can be used by mental health professionals in conjunction with any therapy. This approach tries to understand the behaviors and coping mechanisms of traumatized clients and the problems caused by traumatic events. Trauma-informed care is a solution-oriented approach rather than a problem-oriented one (Tekin and Başer 2021). Trauma-informed care requires professionals working with clients with a trauma history to have comprehensive knowledge of trauma. In addition, these professionals should have knowledge and awareness of the impact of trauma on the lives and actions of clients (Güneş Aslan 2022).
This study aims to adapt the Trauma Informed Care Scale to Turkish culture by conducting validity and reliability studies. Various scales (Kağan et al. 2012, Tanhan and Kayri 2013, Tekin and Kırlığolu 2021, Taytaş and Tanhan 2022) are available in the literature for research on trauma in Turkey; however, no scale directly related to trauma-informed care has been developed or adapted and tested for validity and reliability. This study is therefore necessary and important for meeting this need in the literature and in the field.
---
Methods
---
Sample
This research uses a survey model that aims to describe the existing situation without changing it. The population of the study consisted of mental health professionals (psychiatrists, social workers, psychologists, psychological counselors and psychiatric nurses) working with individuals with a trauma history. Since the size of the population is not known and this is a scale validity study, the sample size was calculated from the number of scale items. For the 21-item scale, the plan was to reach five times the number of scale items, so 105 participants were set as the minimum sample size. According to Tavşancıl (2002), the sample size in scale validity studies should be at least five times the number of items. Since the data were collected through online platforms, participants from all over Turkey were included. The study was completed with 161 participants, exceeding the targeted minimum sample size. Inclusion criteria were: volunteering to participate in the study, being a mental health professional, working actively in the field for more than a year, and being able to speak and read Turkish. Exclusion criteria were: working in another job despite holding a vocational diploma in the field of mental health, being assigned to another unit despite being a mental health professional, and having less than one year of professional experience. In addition, 17 participants who did not meet the inclusion criteria were excluded from the study.
---
Procedure
First of all, permission was obtained from the authors who developed the scale via e-mail. In addition, the opinion and approval of the author was received to replace the expression "patient" in the original items of the scale with the expression "client" used in the field of mental health. Prior to data collection, ethics committee approval was obtained from Necmettin Erbakan University Health Sciences Research Ethics Committee (Date: 06.04.2022 Number: 21/205). During the research, the rules of the Declaration of Helsinki were complied with. The participants of the study were informed that the research results could be used for scientific purposes and their written consent was obtained. This study was conducted by two competent researchers working in the field of clinical social work and behavioral psychology.
The data of the research were collected by the convenience sampling method. Convenience sampling is the method that provides the easiest way to reach a sample representing the population (Gürbüz Şahin 2018). Participants were reached through the researchers' peers and their professional associations. In addition, research links were announced and shared in professional WhatsApp, Facebook and Telegram groups. The data were collected through the surveey.com online data collection platform. Repeated logins were blocked through IP and cookie control, so that each participant could take part in the study only once. Information about the purpose and scope of the study was given to participants on the entry page of the platform, and participants who gave their consent by clicking the "participate in the study" button were assumed to have taken part voluntarily.
---
Language Validity
Translating the scale items from the original language into the language of the target culture is an important step in cultural adaptation studies. Therefore, for the language validity of the scale, the items that were originally in English were translated into Turkish. During this process, the items of the scale, originally titled Trauma Informed Care, were translated into Turkish by two different sworn translators; at least two independent translators are required at the language validity stage (Aksayan and Gözüm 2002). An academician translation form was then prepared containing the English scale items and their Turkish translations. This form was sent to a total of six academics, three social workers and three psychologists, who have conducted studies on trauma. The researchers compared the corrections received from these academics and created the Turkish version of the scale by adopting the translations thought to best express each item. This version was administered to 20 participants in a pilot study; questions thought to be hard to understand were reviewed, and the final version of the scale to be used in the main study was created.
---
Data Collection Tools
The data of the study were obtained by using the "Demographic Information Form" and the "Trauma Informed Care Scale".
---
Demographic Information Form
The demographic information form, created by the researchers, consists of 9 questions covering gender, age, education, occupation, duration of professional experience, knowledge of trauma-informed care, use of trauma-informed care in professional interventions, training on trauma-informed care, and need for training on trauma-informed care.
Trauma Informed Care Scale (TICS)
The scale was developed by King et al. (2019) and consists of 21 items and 3 subscales: knowledge, attitude and practice. There are 6 items about "Knowledge", 9 items about "Attitude" and 6 items about "Practice". There is no reverse item in the scale. The scale enables the determination of trauma-informed care related knowledge, attitude and practice levels of mental health professionals working with individuals with a trauma history.
Scoring of the five-point Likert type scale is Strongly Disagree (0), Disagree (1), Undecided (2), Agree (3), Strongly Agree (4). Although the scale does not have a cut-off score, the high score indicates the need to learn about trauma-informed care. As a result of the validity study conducted with 592 healthcare professionals, confirmatory factor analysis of the scale revealed that 21 items provided the strongest internal consistency reliability for the general tool and each factor. The Cronbach Alpha value of the scale was 0.86, the knowledge subscale was 0.84, the attitude subscale was 0.74, and the practice subscale was 0.78 (King et al. 2019).
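The scoring rule described above translates directly into a small routine. The following Python sketch is purely illustrative and is not part of the original study: the response labels, their 0-4 coding and the subscale sizes (6 knowledge, 9 attitude, 6 practice items) come from the text, while the exact item numbering and the function name are hypothetical.

```python
# Illustrative only; not part of the original study. Likert coding and subscale
# sizes follow the text; the item numbering is hypothetical.
LIKERT = {
    "Strongly Disagree": 0,
    "Disagree": 1,
    "Undecided": 2,
    "Agree": 3,
    "Strongly Agree": 4,
}

SUBSCALES = {                       # hypothetical item numbering for illustration
    "knowledge": range(1, 7),       # 6 items
    "attitude": range(7, 16),       # 9 items
    "practice": range(16, 22),      # 6 items
}

def score(responses: dict) -> dict:
    """Map one respondent's labelled answers {item number: label} to subscale and total sums."""
    coded = {item: LIKERT[label] for item, label in responses.items()}
    scores = {name: sum(coded[i] for i in items) for name, items in SUBSCALES.items()}
    scores["total"] = sum(scores[name] for name in SUBSCALES)  # no reverse-scored items, no cut-off
    return scores
```

Because there are no reverse-scored items, the total is simply the sum of the three subscale sums; interpretation of high scores follows the description in the text above.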
---
Statistical Analysis
The data obtained in the research were analyzed using the SPSS (Statistical Package for Social Sciences) for Windows 22.0 program. Before the analysis, skewness and kurtosis values, histograms and Q-Q plots were examined to assess whether the data set was normally distributed. Skewness and Kurtosis values ranged from -1 to +1. This result indicated normal distribution. Additionally, histograms and Q-Q plots also showed each of the variables was normally distributed. Frequency analysis, correlation, explanatory factor analysis, and reliability analysis were used for data analysis. Pearson correlation coefficient was preferred because the scale was a Likert-type interval scale, the data were normally distributed, and the sample size was sufficient. In addition, Bartlett's Test of Sphericity (BTS) was used for the significance of correlation coefficients between variables. Cronbach's Alpha coefficient was calculated for the reliability of the scale. Since it was a scale adaptation study, only EFA (Exploratory Factor Analysis) was considered sufficient, and CFA (Confirmatory Factor Analysis) was not considered necessary. Possible patterns that may occur can be revealed more clearly in EFA. Structures that cannot be noticed in CFA can be discovered via EFA. For this reason, possible changes that may occur in the structure in adaptation studies can be easily understood with the help of EFA (Orçan 2018).
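The analyses described above were run in SPSS 22.0; the sketch below only illustrates how an equivalent pipeline could look in Python, assuming a hypothetical data file (tics_responses.csv) with one 0-4 coded column per item. The factor_analyzer package is used for the KMO measure, Bartlett's test and the principal-components EFA with varimax rotation, and Cronbach's alpha is computed from its classical formula.

```python
# Illustrative Python equivalent of the SPSS analyses; file and column names are assumptions.
import pandas as pd
from scipy import stats
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("tics_responses.csv")      # 161 respondents x 21 item columns, coded 0-4

# 1. Normality screening: skewness and kurtosis of the total score should lie between -1 and +1.
total = items.sum(axis=1)
print("skewness:", stats.skew(total), "kurtosis:", stats.kurtosis(total))

# 2. Suitability for factor analysis: Bartlett's test of sphericity and the KMO measure.
chi2, p_value = calculate_bartlett_sphericity(items)
_, kmo_total = calculate_kmo(items)
print("Bartlett chi2:", chi2, "p:", p_value, "KMO:", kmo_total)

# 3. Exploratory factor analysis: principal components, varimax rotation, three factors.
efa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
efa.fit(items)
loadings = pd.DataFrame(efa.loadings_, index=items.columns)
cumulative_variance = efa.get_factor_variance()[2][-1]    # cumulative proportion of variance explained

# 4. Reliability: Cronbach's alpha from the classical formula.
def cronbach_alpha(df: pd.DataFrame) -> float:
    k = df.shape[1]
    item_variances = df.var(axis=0, ddof=1).sum()
    total_variance = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

print("alpha (whole scale):", cronbach_alpha(items))
```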
---
Results
The sample of this study consisted of a total of 161 mental health professionals, 102 (63.4%) female and 59 (36.6%) male, aged between 21 and 60 (Mean = 33.16 ± 8.72). Of the participants, 38 (23.6%) were psychiatrists, 43 (26.7%) were psychologists, 37 (23%) were psychological counselors, and 43 (26.7%) were social workers. Ninety participants (55.9%) held a bachelor's degree, 51 (31.7%) held a master's degree, and 20 (12.4%) held a doctoral degree. In terms of professional experience, the largest groups were those who had worked for 1-3 years (32.9%, n = 53) and those who had worked for more than 10 years (32.3%, n = 52). In terms of employing institution, the largest group consisted of participants working in the Ministry of Health (37.9%, n = 61). The findings regarding the demographic characteristics of the participants are provided in Table 1.
While 48 (29.8%) of the participants stated that they had heard of the concept of trauma-informed care before, 21 (13%) stated that they used the trauma-informed care model in their professional intervention process. In addition, 23 (14.3%) participants stated that they had received training on trauma-informed care during their undergraduate education, while 101 participants (62.7%) stated that they needed training on trauma-informed care. The opinions of the participants about trauma-informed care are provided in Table 2.

In order to test the construct validity of the Trauma Informed Care Scale, EFA with the principal components method was conducted using varimax rotation with Kaiser normalization. EFA is a statistical method frequently used in social science studies to determine the hidden factors underlying observed variables (Orçan 2018). The results of the Bartlett sphericity test showed that the data met the sphericity assumption (χ² (210) = 1151.34, p < .001). The analysis yielded a three-factor structure with a KMO (Kaiser-Meyer-Olkin) value of 0.75, explaining 44.90% of the total variance, with eigenvalues above 1. However, the items "Recovery from trauma is possible", "Paths to healing/recovery from trauma are different for everyone" and "Informed choice is essential in healing/recovery from trauma" were excluded because they did not load on the factor corresponding to their original subscale and had loadings below 0.32, and the analyses were repeated. The results indicated that a 3-factor structure emerged, explaining 50.36% of the total variance, in which all items fell into the subscales of the original scale. As a result of the analysis, 3 items from the Attitude subscale were removed and the final 18-item version that can be used in Turkish culture was created. Correlation analyses indicated that the total mean score was highly and positively correlated with all subscales.

The internal consistency values of the scale were also examined. Cronbach's alpha coefficient is a reliability value that indicates whether the scale items relate to the characteristic being measured; it provides information about how consistent the scale items are with each other and how coherent a group they form (Büyüköztürk 2010). The Cronbach's alpha internal consistency coefficient was 0.81 for the practice subscale, 0.72 for the knowledge subscale, and 0.82 for the attitude subscale. The Cronbach's alpha coefficient calculated for the whole scale was 0.80. Findings are provided in Table 3.

Pearson correlation analysis was conducted to examine the relationships between the total mean score and the subscales of the Trauma Informed Care Scale. The results demonstrated that the total score average was positively and highly correlated with all subscales (Knowledge: r = 0.66, p < .001; Attitude: r = 0.71, p < .001; Practice: r = 0.73, p < .001). There were positive and low correlations between knowledge and attitude (r = 0.35, p < .001), between knowledge and practice (r = 0.20, p < .001), and between attitude and practice (r = 0.20, p < .001). The findings are provided in Table 4.
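As a hedged illustration of the follow-up steps reported above (dropping the three poorly loading items, re-running the EFA and correlating the total score with the subscales), the sketch below continues the hypothetical Python example from the Methods section. The specific item labels dropped and the column ordering of the retained subscales are assumptions for illustration, not information from the study.

```python
# Continues the hypothetical example above; dropped item labels and the ordering
# of the retained columns into subscales are assumptions, not study data.
import pandas as pd
from scipy.stats import pearsonr
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("tics_responses.csv")       # same hypothetical 21-item file as above
dropped = ["item_7", "item_8", "item_9"]        # placeholder labels for the three removed items
retained = items.drop(columns=dropped)          # 18 items remain

efa18 = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
efa18.fit(retained)                             # re-run EFA on the reduced item set

subscale_blocks = {
    "knowledge": retained.iloc[:, 0:6],
    "attitude": retained.iloc[:, 6:12],
    "practice": retained.iloc[:, 12:18],
}
scores = pd.DataFrame({name: block.sum(axis=1) for name, block in subscale_blocks.items()})
scores["total"] = scores["knowledge"] + scores["attitude"] + scores["practice"]

# Pearson correlations between the total score and each subscale, and between subscales.
for name in subscale_blocks:
    r, p = pearsonr(scores["total"], scores[name])
    print(f"total vs {name}: r = {r:.2f}, p = {p:.3f}")
r_ka, _ = pearsonr(scores["knowledge"], scores["attitude"])
print("knowledge vs attitude: r =", round(r_ka, 2))
```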
---
Discussion
The main purpose of this study was to adapt the measurement tool of knowledge, attitude and practice levels related to trauma-informed care developed by King et al. (2019) into Turkish and to establish its validity and reliability with scientific methods. The Cronbach's alpha reliability coefficients calculated for both the total scale and the subscales were at satisfactory levels: 0.81 for the practice subscale, 0.72 for the knowledge subscale, 0.82 for the attitude subscale, and 0.80 for the total scale. An internal consistency coefficient above 0.70 indicates that a scale has high reliability (Büyüköztürk 2010). The Cronbach's alpha values obtained in the original study of the scale were 0.84 for knowledge, 0.74 for attitude and 0.78 for practice (King et al. 2019).

According to the EFA results, the KMO value was 0.75 and the Bartlett test χ² value was 1151.34 (p < .001). A KMO value between 0.5 and 0.7 is considered normal, and between 0.7 and 0.8 good (Hutcheson and Sofroniou 1999). The BTS value should be significant at the p < .05 level (Alpar 2020); the significance level in this study was p < .001. These results showed that the sample and the scale were suitable for factor analysis. The Turkish version of the scale was three-dimensional, as in the original, and the three-dimensional structure explained 44.90% of the variance of the measured feature. A high explained variance can be interpreted as an indicator that the related concept or construct is measured well (Büyüköztürk 2007). In addition, the eigenvalue results (Alpar 2020), which can be used as an indicator of how many factors a scale should consist of, suggest that a 3-factor structure is appropriate. The factor loadings of the items ranged between 0.43 and 0.85; factor loadings above 0.30 indicate strong construct validity (DeVellis 2017). The results showed that the scale met the validity criteria.

However, the items "Recovery from trauma is possible", "Paths to healing/recovery from trauma are different for everyone" and "Informed choice is essential in healing/recovery from trauma" were excluded because they did not load on the factor corresponding to their original subscale and had loadings below 0.32, and the analyses were repeated. A 3-factor structure emerged, explaining 50.36% of the total variance, in which all items fell into the subscales of the original scale. As a result, 3 items from the Attitude subscale were removed and the final 18-item version that can be used in Turkish culture was created. In the original study, a total of 7 items were removed from the initial 28-item model, 5 items from the knowledge subscale and 2 items from the attitude subscale, producing the final 21-item model of the scale (King et al. 2019). The removal of the three original items after the analysis may be explained by the assumption that these items do not carry the same meaning in Turkish culture.
Pearson correlation analysis revealed that the correlation between the knowledge and attitude sub-dimensions was 0.35, between the knowledge and practice sub-dimensions 0.20, and between the attitude and practice sub-dimensions 0.20. In the original study by King et al. (2019), the corresponding correlation coefficients were 0.55 between knowledge and attitude, 0.28 between knowledge and practice, and 0.65 between attitude and practice.
Compared to the original study, the correlation values were relatively lower in this study. In particular, the correlation between attitude and practice sub-dimensions was much weaker than in the original study. However, all correlations were positive and significant as in the original study. The findings obtained from our study overlap with the findings obtained from the original study of the scale.
An important limitation of the study is that the research data was collected online. The findings obtained from the research are limited to the answers given by the mental health professionals participating in the research.
Research results can be generalized to the mental health professionals involved in the study. Additionally, although all participants of the study were mental health professionals, this does not mean that they have the same level of experience with trauma. Researchers should take this into account when evaluating the study. Finally, since the original version of the scale did not have validity and reliability studies for other cultures, the comparison of the findings obtained from this study was limited to the findings of the original study.
---
Conclusion
As a result of the statistical analyses, the validity and reliability of the Trauma Informed Care Scale have been demonstrated in the light of scientific data. With this study, a scientific measurement tool that enables the determination of the knowledge, attitude and practice levels of healthcare professionals working with individuals with a trauma history has been brought to the literature. The Trauma Informed Care Scale is a valid and reliable measurement tool that can be used by professionals (physicians, nurses, psychologists, psychological counselors, social workers) working with trauma survivors, and by researchers planning studies on trauma-informed care and/or trauma-sensitive care.
---
Conflict of Interest:
No conflict of interest was declared. Financial Disclosure: No financial support was declared for this study.
---
Addendum-1. Trauma Informed Care Scale (Turkish Version)
Trauma Informed Care Scale, Turkish Version (Travma Bilgili Bakım Ölçeği)
Instruction: This scale measures the level of knowledge, attitudes and practices of mental health professionals working with trauma-victimized clients regarding trauma-informed care. It is scored as Strongly Disagree (0), Disagree (1), Neutral (2), Agree (3), Strongly Agree (4). Please mark the most appropriate option for each item (response columns: 0 1 2 3 4).
1. Travmaya maruz kalmak yaygındır.
2. Travma fiziksel, duygusal ve zihinsel sağlığı etkiler.
3. Madde kullanımı sorunları, geçmişteki travmatik deneyimlerin veya olumsuz çocukluk yaşantılarının göstergesi olabilir.
4. Ruh sağlığı sorunları ile geçmiş travmatik deneyimler veya olumsuz çocukluk yaşantıları arasında bir bağlantı vardır.
5. Güvensiz davranış, geçmiş travmatik deneyimlerin veya olumsuz çocukluk yaşantılarının göstergesi olabilir.
6. Travma istemsiz bir şekilde tekrarlayabilir.
7. İnsanlar kendi travmalarını toparlama ve iyileştirme konusunda uzmandırlar.
8. Danışanlarımız ve aileleriyle etkin bir şekilde çalışmak için travma bilgili uygulama önemlidir.
9. Travma bilgili uygulama hakkında kapsamlı bir anlayışa sahibim.
10. Travma bilgili uygulama ilkelerine inanıyor ve bunları destekliyorum.
11. Travma bilgili uygulama hakkında uzmanlığımı meslektaşlarımla paylaşıyor ve onlarla etkin bir şekilde işbirliği yapıyorum.
12. Travma bilgili uygulama konusunda daha fazla eğitim almak istiyorum.
13. Danışanlarla olan tüm etkileşimlerde şeffaflığı koruyorum.
14. Danışanlara seçenekler sunuyorum ve kararlarına saygı duyuyorum.
15. Danışanların ve meslektaşlarımın kendi güçlü yanlarını fark etmelerine yardımcı oluyorum.
16. Çalışmalarıma başlamadan önce tüm danışanları bilgilendiririm.
17. Her danışanla olan etkileşimim benzersizdir ve onların özel ihtiyaçlarına göre uyarlanmıştır.
18. Öz-bakım yapıyorum (kendi ihtiyaçlarım ve sağlığımla ilgileniyorum).
---
Scoring
Knowledge subscale: items 1, 2, 3, 4, 5, 6
Attitude subscale: items 7, 8, 9, 10, 11, 12
Practice subscale: items 13, 14, 15, 16, 17, 18 | 28,263 | 1,518
ac97fe0d6b2de1c9d23bd12e17553a06386acf1e | Community Learning Centres (CLCs) for Adult Learning and Education (ALE): development in and by communities | 2,022 | [
"JournalArticle",
"Review"
] | Institutionalised forms of adult learning and education (ALE) such as community learning centres (CLCs) and related models are found in most parts of the world. These are spaces offering opportunities for literacy and skills training, health and citizenship, general, liberal and vocational education, in line with fuller recognition of the meaning of lifelong learning, and in the context of local communities. Often these institutions form the basis for even more informal and participatory learning, like study circles and community groups. They may share facilities like libraries and museums, clubs and sports centres, which are not within the remit of the Ministry of Education. This article reviews relevant literature and identifies recent studies and experiences with a particular focus on the Asia-Pacific and Africa regions, but also considers insights related to interventions at the global level. Findings point to low levels of participation of adults in general, and more specifically so for vulnerable and excluded groups which can hardly cross respective barriers. The authors' discussion is guided by the question What conditions are conducive to having more and better ALE for lifelong learning -and which roles can CLCs and other community-based ALE institutions play? This discussion is timely -the authors argue that CLCs need to be given more attention in international commitments such as those made in the context of the International Conferences of Adult Education (CONFINTEA) and the United Nations 17 Sustainable Development Goals (SDGs). CLCs, they urge, should be part of transformative discourse and recommendations at CONFINTEA VII in 2022. | Introduction
Adult learning and education (ALE) is currently gaining in importance in a policy discourse which looks at the human right for the future of education through the lens of lifelong learning (LLL) (Elfert 2019; UIL 2020; ICAE 2020). This paradigm shift calls for lifelong learning for all, and that includes ALE for all youth and adults. To better understand what this right entails, A Review of Entitlement Systems for LLL (Dunbar 2019), prepared for the International Labour Organization (ILO) and the United Nations Educational, Scientific and Cultural Organization (UNESCO), translates this as an entitlement for all adults at work and analyses the situation in sixteen countries, documenting achievements using a system of four stages. These stages range from the declaration of a commitment to lifelong learning (stage 1), through the declaration of an entitlement to lifelong learning (stage 2) and the implementation of elements of a lifelong learning entitlement (LLLE) (stage 3), to the successful fulfilment of an entitlement to lifelong learning (stage 4) (ibid.).
Extending such entitlement to all those working in the informal economy includes an additional two billion people worldwide, many of whom are "three times more likely to have only primary education (as the highest level of education) or no education as compared to workers in the formal economy" (Palmer 2020, p. 4). Thus, in the face of a reality where educational governance is dominated by the formal sector of education, a structural transformation of current institutions and systems is needed urgently (ibid., p. 49).
If lifelong learning for all is to be achieved, increasing the participation of youth and adults in ALE is highly important. This calls for a closer look not only at all face-to-face and digital opportunities, but also for an analysis of the diversity of institutions and providers of ALE. In this context, our particular interest here is in community learning centres (CLCs) as they have increased in numbers and geographic spread, serving a growing number of people over the past three decades. Indeed, policymakers, as well as the wider "policy community" at all levels, are increasingly using CLC as a generic term to capture a variety of community-based places of adult learning (e.g. Ahmed 2014;Yamamoto 2015;Chaker 2017;Le 2018;Rogers 2019).
CLCs have also received attention and become a concern in the global monitoring of education, training and learning. UNESCO's Fourth Global Report on Adult Learning and Education (GRALE 4) suggests throwing the net even wider: "While CLCs have been in the foreground of the discussions on institutional infrastructure, little attention has been given to traditional popular/liberal adult education institutions" (UIL 2019, p. 165). The latest Global Education Monitoring Report (GEM) 2021/2022 on Non-state actors in education: Who chooses? Who loses? opens the relevant section by stating:
Community learning centres (CLCs) are increasingly recognized as playing an important role in providing education opportunities meeting local communities' needs (UNESCO 2021, p. 265).
In this article, we take a closer look at some of these aspirations and developments through the lens of ALE's local, national and global dimensions. Our discussion is guided by the question: What conditions are conducive to having more and better ALE for lifelong learning, and which roles can CLCs and other community-based ALE institutions play? We are particularly interested in the conditions that promote and support CLCs to live up to the expectations of participants, providers and stakeholders, and in how local, national and global recommendations and initiatives could help to improve conditions, including levels of institutionalisation and professionalisation.
In the following sections we look at how CLCs and other forms of institutionalised community-based ALE emerged. We investigate why this seems to be important both for current policy discourses within countries as well as for the global development agenda with its 17 Sustainable Development Goals (SDGs). While the fourth of these goals (SDG 4) specifically concerns education, aiming to "Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all" (WEF 2016, p. 15), and, moreover, adopting an inclusive stance in ensuring that "no one is left behind" (ibid., p. 7), it has been noted that ALE is in fact still being left behind in the implementation of SDG 4 and its lifelong learning agenda (UIL 2017, 2019). ALE continues to remain on the margins as "the invisible friend" of the SDGs (Benavot 2018), although actually education and learning opportunities for youth and adults have the potential to support most of the other 16 SDGs (Schweighöfer 2019) and should be recognised for playing an important role in transformation and sustainability (Schreiber-Barsch and Mauch 2019).
---
Literature review
Local places in local communities, including "centres" where people learn together, exist in many corners of the world. Centres where adults gather to learn carry many names and are provided by numerous providers in diverse settings. Some are government-supported and/or otherwise funded institutions for formally planned and accredited education and training. Others are created for other purposes and have been adapted and possibly renamed for different kinds of organised instruction. Some may be more diverse and flexible as locally determined and managed forms of learning, complementing facilities set up for some other purpose such as teaching about health, farming and animal husbandry, workers' rights, or reaching particular groups of learners such as women or those retired from paid employment. But what constitutes such ALE institutions as CLCs is less clearly known.
---
Terminology and infrastructure
We begin this literature review by considering the variety of terms used in different countries. They are highly influenced by historical and cultural contexts. Language matters here, since a literal translation of the English term "community learning centres" and its acronym CLCs is found hardly anywhere in Eastern Europe, Latin America or francophone Africa. We also have to take into account that a term may point specifically to an institution while at the same time having some overarching meaning. For the purpose of this article, we use "community learning centre" or "CLC" as more of a generic term where we know of its origin, its original definition and later adaptation. We complement this with the wider term "community-based learning centres" for institutions with longer or shorter traditions. This seems appropriate, as does our use of "adult education" as a generic term complemented with the broader "adult learning and education" or "ALE" to reflect the changing understandings of lifelong learning, which is also life-wide and life-deep.
Terms, traditions and trajectories of ALE institutions vary between and within communities, countries and world regions. So do commonly used names and providers, which in a broader perspective of community-based institutions of learning opportunities for adults include folk high schools in the Scandinavian countries (Bjerkaker 2021), Volkshochschulen in Germany (Lattke and Ioannidou 2021), adult education centres in Georgia (Sanadze and Santeladze 2017), but also in Belarus and Ukraine where such centres are attached to the "houses of culture" run by the city council (Lukyanova and Veramejchyk 2017; Smirnov and Andrieiev 2021). In Japan, there are the kominkan (Oyasu 2021), and Bangladesh has people's centres (Ahmed 2014). In Mongolia, former non-formal education (NFE) or "enlightenment" centres are now referred to as lifelong learning centres (LLCs) (Duke and Hinzen 2016), while in the Republic of Korea the former community learning centres (CLCs) have also been renamed lifelong learning centres (LLCs), to reflect their designation as local institutions for the Korean national lifelong learning system (Choi and Lee 2021). In Tanzania, there are folk development colleges (Rogers 2019); South Africa has public adult learning centres (PALCs) (Daniels 2020), and Bolivia has "alternative education centres" (Limachi and Salazar 2017).
In a number of countries, these centres have got together and built national associations or networks which provide opportunities for cooperation and support services. Examples are the Georgian Adult Education Network (GAEN), the National Network of Alternative Education Centres (REDCEA) in Bolivia, the adult education centres of the Afghan National Association for Adult Education (ANAFAE), the National Network of Folk Universities in Poland (Hanemann 2021, pp. 53, 55), and the National Kominkan Association in Japan (Oyasu 2021). In Germany, the Deutscher Volkshochschul-Verband (DVV) serves as the national umbrella organisation for its regional member associations and the Volkshochschulen as local centres (Hinzen and Meilhammer 2022).
---
Europe
In Europe, the early beginnings of modern ALE and its institutionalisation can be traced back to the Enlightenment era, especially in Scandinavia where the folk high school movement of today looks to Frederik Severin Grundtvig as a founding father (Bjerkaker 2021). More vocational training-oriented activities and programmes grew out of needs arising from the agricultural and industrial revolution and were often embedded in working-class movements and education. In Great Britain, the campaign and research around the centenary of the 1919 Final Report of the Adult Education Committee emphasised the importance of ALE after World War I (Holford et al. 2019) as a form of workers' political and economic education. In Germany, ALE became a constitutional matter in 1919, with a special paragraph stipulating that "the popular education system, including the adult education centres, shall be promoted by the Reich, the federal states and the municipalities" (Lattke and Ioannidou 2021, p. 58). The need to support ALE in institutions was recognised as a governmental obligation. It seems that there are similarities and differences in historical evolution between Britain and Germany (Field et al. 2016), across Europe, and indeed globally.
For all the wealth of ALE and local learning centres under different names worldwide (Avramovska et al. 2017; Gartenschlaeger 2017), Europe developed a rich tradition of community-based learning, often closely connected to voluntary endeavour at a time of major changes. The general movement was related in time and cause to industrialisation, followed by political democratisation, with the need for new skills, attitudes and conduct in new industrial, technical, economic and social conditions. The kinds and levels of state support to voluntary endeavour varied, but all saw partial devolution to local communities, often with activities and institutions devoted to what today is called citizenship education (Hinzen et al. 2022).
To some extent, Volkshochschulen (vhs) might be called a German version of CLCs (Hinzen 2020). In Germany today, ALE governance includes policy, legislation and financing for the almost 900 vhs which provide services to participants on their doorstep through offering courses, lectures or other activities, which are taken up at an annual level of around 9 million enrolments. Aggregated statistics showing data on institutions, participants, staff, courses, finances etc. have been collected and disseminated through the German Institute for Adult Education (DIE) - Centre for Lifelong Learning of the Leibniz Association for the past 58 years and are available for further analysis and research (Reichart et al. 2021). Longitudinal studies show changes in content and offerings in terms of vhs supply and demand, especially at times when socio-political developments require the acquisition of new competencies and skills, attitudes and values in the education and training of adults (Reichart 2018). Access and inclusion are key issues, giving special attention to respective policies and supporting barrier-free opportunities for youth and adults with disabilities, or providing targeted funding for equal chances in health education services (Pfeifer et al. 2021). These are areas of particular concern when monitoring ALE participation and non-participation (Stepanek Lockhart et al. 2021).
---
North America
The term community learning centres, as well as the acronym CLCs, is also used in North America for initiatives in educational reform. In Canada, the Government of Québec provided support and, in 2012, published a CLC "resource kit" for "holistically planned action for educational and community change". This was prompted by debates on reforming schools and training centres to better "respond to the particular culture and needs of the communities" they were serving and to "provide services that are accessible to the broader community" (Gouvernement du Québec 2012, pp. 2, 4). The framework for action underlying this resource kit understands the CLC as an institutional arrangement aiming to jointly engage children, youth and adults in developing their community and catering for the needs of its members. In the United States, a similar debate using the term community learning centres is ongoing and keeps asking how schools can be improved through engagement of the communities they operate in, and also how the communities can benefit from such engagements (Lackney 2000; Jennings 1998; Penuel and McGhee 2010; Parson 2013).
---
Other world regions
The orientation and understanding of CLCs and related facilities is widened by Hal Lawson and Dolf van Veen (2016) through a variety of international examples. The most recent collection of experiences from more than twenty countries around the globe is by Fernando Reimers and Renato Opertti (2021); it includes a case study from Mexico on "Schools as community learning centers" (Rojas 2021). All of these examples and their findings are relevant to our discussion of community-based ALE through CLCs which have adults as their main participants, but often also provide opportunities for children and youth, including examinations for school leaving qualifications as second-chance opportunities (Lattke and Ioannidou 2021, p. 60).
In sum, and keeping in mind our guiding question about conditions conducive to improved and enlarged ALE development, with particular focus on the role of institutions like CLCs, this literature review so far suggests that the need for wider participation in ALE is situated in a landscape featuring a variety of community-based ALE institutions with diverse backgrounds using different terms, including CLCs. However, while this landscape is bound to offer considerable potential for increasing participation in education, training and learning opportunities among adults so far not participating, there is also a need to search for and understand barriers and hindrances to participation, and identify those conditions which provide more ALE opportunities and make up better institutions. This is where ALE practice-related work and materials are getting increased attention. Examples are the Curriculum globALE (DVV International et al. 2021), tailor-made for the training of adult educators and staff, and the Curriculum institutionALE (Denys 2020), designed for organisational development and ALE system building (Belete 2020).
Furthermore, Richard Desjardins and Alexandra Ioannidou's study on "some institutional features that promote adult learning participation" (Desjardins and Ioannidou 2020, p. 143) is of interest to us, complemented by this observation, made in GRALE 4:
On the supply side, it is clear that a strong, universal ALE system is linked to relatively high levels of equality in participation. Within this, there is abundant scope for targeted initiatives that are designed to reach out to underrepresented groups and reduce institutional barriers to participation (UIL 2019, p. 176). This is where CLCs and other institutions of community-based ALE could and should strive to play an important role.
Finally, we point to related discourses concerning expectations of CLCs beyond the usual claims. In the context of learning cities or learning regions, for example, Manzoor Ahmed asks: "Are community learning centres the vehicle?" (Ahmed 2014, p. 102). Or in the context of education for sustainable development, where Hideki Yamamoto positions CLCs as a "platform for community-based disaster preparedness" (Yamamoto 2015, p. 32). In a related vein, the dimensions of local solutions to the climate crisis for Indigenous minorities in Malaysia are exemplified by Mazzlida Mat Deli and Ruhizan Muhamad Yasin in their article entitled "Community-based learning center of renewable energy sources for Indigenous education" (Deli and Yasin 2017). Such wider perspectives were intensively discussed during an international conference on adult education centres which suggested making use of CLCs as local hubs for the implementation of the SDGs (DVV International 2017). This is close to the late Alan Rogers' interesting analysis of "Second-generation non-formal education and the sustainable development goals: Operationalising the SDGs through community learning centres" (Rogers 2019), with the first generation of non-formal activities and institutions being situated back in the 1970s (Coombs and Ahmed 1974).
Having concluded our literature review, we now turn our attention to examples of CLCs in Asia and Africa, considering their development in and by communities.
---
Experiences and examples from Asia and Africa
There are several reasons why we focus here on examples and developments from the Asian and African regions more extensively than on other continents. In the case of Asia there is diversity in terms of how long CLCs have been operating for, and in directions and modes of development. The examples from Africa do not have decades of such development; they are part of current policy interventions dating back only a few years, albeit based on previous experiences. It is worth noting that the combined populations of these two continents (around 5.3 billion people; UNFPA 2022) amount to almost three-quarters of the world population (ibid.). Many countries in Asia and Africa have higher numbers of non-literate adults and out-of-school children and youth than those in other world regions. This increases the need for ALE participation in relevant institutions like CLCs, their institutionalisation and professionalisation. While we have to accept that limited data are available for ALE and CLCs globally, data are available for Asia, and in Africa some innovative developments supporting CLCs are grounded in broader approaches to ALE system-building.
---
The Asia Pacific region: Viet Nam, Thailand and Japan
In 1998, the UNESCO Regional Office in Bangkok started a CLC project as part of its Asia Pacific Programme of Education for All (APPEAL) (UNESCO Bangkok 2001). It was planned as an attempt to reach those "with few opportunities for education", and based on this definition of a CLC:
A community learning centre (CLC) is a local place of learning outside the formal education system. Located in both villages and urban areas, it is usually set up and managed by local people in order to provide various learning opportunities for community development and improvement of the quality of life. A CLC doesn't necessarily require new infrastructure, but can operate from an already existing health centre, temple, mosque or primary school (UNESCO Bangkok 2003, p. 2).
The project spread across many countries in the region, and by 2003 Bangladesh, Bhutan, Cambodia, China, India, Indonesia, Iran, Kazakhstan, Lao PDR, Malaysia, Mongolia, Myanmar, Nepal, Pakistan, Papua New Guinea, the Philippines, Samoa, Sri Lanka, Thailand, Uzbekistan and Viet Nam were mentioned as participating (UNESCO Bangkok 2003, p. 3). APPEAL provided a resource kit (UNESCO Bangkok 2006) and followed up with manuals, partner meetings and conferences. Cambodia developed cooperation with a French non-governmental organisation (NGO) and produced its own guide on managing CLCs (ACTED 2018). At a regional meeting of APPEAL held in 2012, a new CLC definition emerged:
A Community Learning Centre (CLC) is a community-level institution to promote human development by providing opportunities for lifelong learning to all people in the community (ACTED 2018, p. 1, referring to UNESCO Bangkok 2013).
The orientation towards lifelong learning for all is growing. The increase in diversity within and between countries ever since the beginning of the APPEAL project can be seen in a collection entitled Community-Based Lifelong Learning and Adult Education: Situations of Community Learning Centres in 7 Asian Countries (UNESCO Bangkok 2016).
The reasons for achievements and success seem to be manifold, including the harmony between programmes and local needs, lifestyles and strong government support. Ai Tam Pham Le provides an interesting case study for Myanmar, where she discusses the contributions of CLCs to personal and community development (Le 2018). In Indonesia, the CLC manages the non-formal education programme (Shantini et al. 2019), and in Nepal CLCs are seen as supporting lifelong learning and are now part of national education plans (MoE Nepal 2016).
In this article, we present examples from Viet Nam, the country with the highest number of CLCs in Southeast Asia; Thailand, which has diverse CLC organisations; and Japan, with its own pre-CLC kominkan. These three country cases serve to describe some of the circumstantial similarities and differences in which CLC developments emerged and co-existed with other forms of community-based ALE.
---
Viet Nam
Learning is a traditional part of Vietnamese culture. Multiple folk sayings reflect the value of learning: "A stock of gold is worth less than a bag of books"; "An uneducated person is an unpolished pearl"; "Learning is never boring; teaching is never tiring". Respect for teachers is required, as in "He who teaches you is your master, no matter how much you learn from him". Learning is a way of life in this country. The history of Viet Nam is adorned with people who, against the odds, overcame difficulty and studied to achieve high levels. One example is Mac Dinh Chi, who studied by himself at night in the faint light of the fireflies he kept in his hand because his family could not afford an oil lamp. As a result of his studies, he became a Zhuàngyuán, the title given to the scholar who achieved the highest score on the highest level of the Imperial examination in ancient Viet Nam.
When the country was reunited after the resistance wars, the Vietnamese government restarted the learning movement, a process initiated in 1945 by Ho Chi Minh, the first leader of the independent socialist republic of Viet Nam. Literacy classes and complementary education programmes (equivalent to primary education) were organised in schools, religious facilities like Catholic churches, Buddhist pagodas and large private houses. The establishment of two pilot CLCs in 1999 was a new national intervention by the government to adopt "CLC[s] as a delivery system of continuing education at the grassroots" (Okukawa 2009, p. 191), providing not only literacy programmes but also knowledge and skills that would empower learners and boost community development.
Currently, approximately 11,000 CLCs form the most extensive network of non-formal education institutions in Viet Nam, reaching nearly all communes and wards of the country and providing local learning activities that range from literacy and post-literacy to income generation, leisure skills and knowledge, and practical knowledge of civil laws, legitimate actions and legal processes. In 2018, there was a total enrolment of 20 million participants in these CLCs according to capacity-building material circulated internally by the Ministry of Education and Training (MOET Viet Nam 2018). The success of the CLC operation is largely due to the principle "of the people, by the people and for the people" (MOLISA Viet Nam 2018; MOET Viet Nam 2018), under the guidance and with the support of the government through policies. A sense of shared ownership thus encourages local people to engage in CLC activities.
Vietnamese CLCs are autonomous, while receiving professional guidance from the district Bureau of Education and Training (MOET Viet Nam 2008) and administrative management from the government at all levels. In each community, the head of the local People's Committee is also the Director of the CLC (MOET Viet Nam 2008a, 2014), which gives the centre an advantage: easy alignment of CLC programmes and activities with central Government direction (Pham et al. 2015). The practical value of this was demonstrated during the first outbreak of the COVID-19 pandemic in 2020: following directives of the central Government, local governments implemented control measures, raised people's awareness of the disease, and gave advice on disease prevention. In their dual role as head of the local authority and leader of the CLC, these leaders organised appropriate CLC activities in cooperation with mass organisations like the Viet Nam Women's Union and the Youth Communist Union.
---
Thailand
In Thailand, community-based learning also has deep roots. Among its forms are CLCs where newspapers were provided to "promote reading habits and reinforce reading skills for neo-literates" (Leowarin 2010).
According to Suwithida Charungkaittikul of the Department of Lifelong Education at Chulalongkorn University, 9,524 CLCs spanned the country in 2018, reaching all rural corners. Thai CLCs are located in a variety of physical settings: district administration offices, schools, community halls, local elderly people's private houses, factories and temples. Buddhism is the dominant religion in Thailand, followed by around 95% of the population (ARDA 2021). The approximately 40,000 Thai Buddhist pagodas (MoE Thailand 2017) serve more than religious purposes. They are learning sites because Thai tradition requires that boys come and live in pagodas for an average of three months before the age of 20, to learn to read and write, and to understand ethics and Buddhist history and philosophy. Thus, the pagodas are "the centre of all kinds of community activities, including learning" (Sungsri 2018, p. 214). Today they also host CLCs providing learning to all people, regardless of gender.
Operating on the same principle "of the people, by the people, and for the people" as in Viet Nam, Thai CLCs have transformed non-formal education provision from "bureaucracy-oriented to community-based approaches" (Leowarin 2010). They have a strong base in the National Education Act (RTG 1999) and are especially supported by the Non-formal and Informal Education Promotion Act (RTG 2008), which paves the way for the decentralisation of education by institutionalising CLCs.
Two philosophical approaches have had great influence on adult education, and thus on CLC programmes, in Thailand. Khit-pen, essentially conceived and introduced by Dr Kowit Vorapipatana, former head of government-led ALE, literally means having the full ability to think (Sungsri and Mellor 1984; Nopakun 1985, cited in Ratana-Ubol et al. 2021). It was initially applied to functional literacy programmes. The Sufficiency Economy of His Majesty the late King Bhumibol Adulyadej promotes a way of life based on patience, perseverance, diligence, wisdom and prudence, for balance and the ability to cope appropriately with critical challenges. It has given rise to a growing number of community learning centres, called sub-district non-formal and informal education centres, that teach local people a way of life that relies sufficiently and sustainably on natural resources.
Traditions, religious norms and philosophical foundations, blended with strong government support, have given Thai CLCs the character they have today: diversity in location, but uniformity in purpose.
---
Japan
The Japanese kominkan, a distinctive learning centre phenomenon which sprang up post-World War II, was not a child of UNESCO's APPEAL project, but shares purposes and functions with its CLCs.
War-torn Japan needed to "build back better" -this slogan aptly applies to the period. Article XXVI in Japan's new constitution stated that "All people shall have the right to receive an equal education correspondent to their ability, as provided by law" (Prime Minister of Japan 1946). With this Constitution, the notion of democracy and a process of decentralisation were introduced into Japanese people's lives.
In 1946, the Ministry of Education issued a plan for the establishment of kominkan [public citizens' halls] in every prefecture. The purpose of kominkan is to facilitate social education, self-improvement and community development through a variety of learning activities initiated and implemented by local people themselves, and through social interaction including meetings between the community and local government.
Kominkan suited the lifestyle of most Japanese people at the time. "Until the mid-1950s it [Japanese society] was essentially a rural society", featuring strong communal relationships, manifested in the fact that "communities were structured into groups -the gonin gumi -and … the most important social value was the subordination of the individual to the group" (Thomas 1985, p. 81). Kominkan had a strong legal base in the 1947 Fundamental Law of Education (MEXT Japan 1947) and the Social Education Law of 1949 (MEXT Japan 1949). Kominkan quickly emerged as a tool for community empowerment, and became the backbone of social education. The number of kominkan soared from 3,534 in 1947 to 20,268 in 1950 (National Kominkan Association 2014) and peaked at 36,406 in 1955 (Arai and Tokiwa-Fuse 2013). Though the number had fallen to 14,281 by 2018, according to the National Social Education Survey (Oyasu 2021, p. 98), kominkan have, for several social and administrative reasons, retained their status as community-based learning sites that promote lifelong learning and a learning society at local levels.
Many factors contributed to the success and extensive network of kominkan in the 1950s. The first and most important was the legal status of kominkan as entities established under, and for purposes set out clearly in, the Fundamental Law of Education of 1947 and the Social Education Law of 1949 (MEXT Japan 1947, 1949), and subsequently "the national government […] standards for establishing and managing Kominkan and […] financial subsidies for their construction" (MEXT Japan 2008). Secondly, kominkan met the genuine needs of society in the post-war era, when people felt an urge to acquire new values, new skills to improve their own lives, and new knowledge to rebuild the country. This process of democratisation and decentralisation also gave a strong boost to people's spirit, as they understood that they were actually managing their own learning, and that learning benefited their own lives in addition to building community integrity.
Collaborative learning in a general sense doubtless began when humans came to live together in groups, a primitive form of community. It was in living together and learning from one another that Indigenous wisdom accumulated, and on this basis community systems developed. Today, CLCs exemplify the same correlation between individual members' learning and holistic community advancement. In this sense, kominkan are a good example of best practice.
---
Research initiative on CLCs in Asia
In 2013, a Regional Follow-up Meeting to the Sixth International Conference on Adult Education (CONFINTEA VI) for the Asia and Pacific region suggested conducting country-based research in the context of the wider benefits of CLCs (UIL 2013). This was initiated by the National Institute for Lifelong Education (NILE) of the Republic of Korea, the UNESCO Institute for Lifelong Learning (UIL) and the UNESCO Regional Office in Bangkok (ibid.). All six countries which joined the project had already worked together within the APPEAL initiative on CLCs. Not least to enable comparability, research in each of these countries (Bangladesh, Indonesia, Mongolia, the Republic of Korea, Thailand and Viet Nam) was based on a joint design and questionnaire, and results were compiled in a synthesis report (Duke and Hinzen 2016). Despite the diversity of the countries in terms of their political, economic and cultural history and present situation, the synthesis report contained implications and proposals which are important here:
Policy, legislation and financing. The findings suggest that to create a system of CLCs adequate in quantity and quality throughout the country, support is needed similar to what is available through the formal education system to schools, universities and vocational training. The necessary policies and legislation related to CLCs must have a sound financial basis, in this sense no different from that for formal education. […].
Assessments, monitoring and evaluation. Learning and training assessments at local level should produce data relevant to the construction, planning and development of programmes, curricula and activities. These need to be guided by forms of continuous monitoring and regular participatory evaluation involving CLC learners and facilitators. All of this, including monitoring and evaluation, are professional support services to help local CLCs to improve (Duke and Hinzen 2016, p. 28).
In the next section, we turn to Africa, where CLCs are still evolving. While focusing to some extent on Ethiopia and Uganda, where some research into CLCs has already been conducted, we do not present the two countries separately. Rather, they serve as examples of what is, as mentioned earlier, part of current policy interventions in a larger number of African countries.
---
The African region
The concept and practice of community-based ALE and CLCs in East Africa, as in many other parts of Africa, have evolved over time. The folk development colleges of Tanzania, which started in the 1970s as part of international cooperation with Sweden and its folk high schools, are a special case, but an interesting one, since they continue to be supported by government funding today (Rogers 2019). Local experiences of community learning are also found in Kenya, where CLCs have been brought into sustainable development efforts (Nafukho 2019), and in Lesotho, where CLCs are being tested as providers of ICT services for the community (Lekoko 2020). In South Africa there are attempts to combine CLCs with efforts to improve popular and community education (von Kotze 2019). A more general literature review of CLCs in selected African countries (Hinzen 2021) found that they are places where not only youth and adults, but also children and the elderly can access a variety of learning and education opportunities as well as other services (like community libraries, vocational training or internet access) provided by local government sector offices, often implemented with the involvement of civil society organisations.
---
Ethiopia and Uganda
Ethiopia took action in 2016 after a delegation visited Morocco to learn more about CLCs. The Moroccan concept and design were adapted to the Ethiopian context and ten pilot CLCs were set up in five regional states (Belete 2020). As the benefits for the community and service providers started to emerge, other countries like Uganda and Tanzania became interested, and exposure visits were arranged for key government officials and NGO experts. Uganda has since set up nine CLCs across four pilot districts (Jjuuko 2021), with plans, as in Ethiopia, for upscaling within these districts and rolling out to more. The interest from communities, different government sector offices and other ALE stakeholders has exceeded expectations. It is therefore worth investigating the rationale for setting up CLCs in the region; the services offered and the modalities for offering them; the involvement of stakeholders from both the demand and supply side; steps to start and operationalise CLCs; and considerations for the sustainability and institutionalisation of CLCs within an ALE system. The concept of CLCs in the region is still evolving, and new pathways for ALE are being considered, so in the next section, we also look at what is currently planned for future consideration.
---
Why is there a need for CLCs in Africa?
ALE services are usually offered through learner groups who gather and meet within or close to their communities on a regular basis with a facilitator or trainer for adult literacy classes, different forms of skills training and extension services. While this serves the purpose of bringing ALE closer to its users, it also has limitations, especially in rural communities. In Africa, ALE trainers and facilitators have to travel long distances and cannot always reach all communities in need. Serving everyone requires more staff and more funding. Another limitation concerns the types of services offered, because equipment and materials necessary for certain types of training are not always readily available. To make provision effective, a place is needed where different ALE services can be offered as a one-stop service, and communities of all age groups can gather to conduct their own affairs. In rural African communities, such infrastructure is often poor or lacking. CLCs have the potential to fulfil the needs and interest of ALE service users and providers.
---
What do CLCs offer in Ethiopia and Uganda?
In Ethiopia and Uganda, CLCs have evolved as spaces that offer not only ALE services, but different forms of learning and education opportunities within the spectrum of lifelong learning. In the early days of setting up CLCs in Ethiopia, a need was identified for a place within the CLCs where mothers could leave their children while attending classes. This evolved in many CLCs into full-scale early childhood development (ECD) centres, where preschool-aged children are cared for and can start learning. Urban CLCs in Addis Ababa found that this is also a source of income for the CLC, providing affordable day care for mothers who could not otherwise afford it. The CLCs are government-funded, and the mothers pay a small amount. In Uganda, school-going children attend additional support classes at CLCs. Youth and adults have a variety of services to choose from, based on the concept and definition of ALE in both countries. Integrated adult literacy classes combine literacy and numeracy with livelihood skills training, business skills training, life skills, etc.
Establishing libraries at each CLC, with books for all age groups, strengthens the skills of neo-literates, but also provides a resource centre for all ages, encouraging reading groups. One CLC in Ethiopia constructed an outdoor garden reading room as a quiet space for these activities. Youth enjoy sport and entertainment activities, and many youth clubs have been formed. In Ethiopia, the training offered by CLCs, together with support to engage in savings and loan schemes, has helped many young people to start a business and take up farming. This has contributed to changing their minds about emigrating to other countries for their livelihood. Older adults have found a space to escape loneliness, enjoy discussions with their peers, hold elder council meetings and engage with other age groups. The CLCs have thus also become a place for intergenerational learning. Beyond training and learning opportunities, CLCs also provide a service delivery point. CLCs in Uganda have schedules under which different sector office experts are available on set days with advice and services for individuals and small groups. Health sector offices in both countries have special days for vaccinations of children, health awareness-raising, and instructions on COVID-19 and other diseases. Paralegal services are offered, as well as local mediation of conflicts within the community. CLCs have also started facilitating market days, where trainees can promote and sell their products.
The outbreak of COVID-19 required and prompted adaptation. Ethiopia produced a series of 20 radio programmes on business skills training. This also provided virtual outreach to a bigger CLC target group and promoted existing CLCs and the services they offer. As CLCs evolved into one-stop service centres, assessment of services became a new concern, and CLCs in Uganda started using community scorecards to assess services and have interface meetings between users and providers. Local government offices and politicians alike began to view CLCs as places where good local and integrated governance can be promoted (Republic of Uganda 2018).
---
Who is involved?
Stakeholder involvement should be viewed from both demand and supply sides of service delivery. The different categories of service users from the demand side are highlighted above. Their involvement goes beyond the use of services: CLC management committees are elected and formed with community members acting as a board, and regularly engaging with local government service providers to discuss the types and quality of service, sustainability and finances of their CLC. These committees are provided with training to fulfil their roles. Service providers in Ethiopia and Uganda are mostly local government sector offices, some partnering with NGOs who use the CLC facilities as places to provide services and contribute resources. The sector office experts and managers have formed cross-sectoral technical committees who jointly plan, budget, implement and monitor service provision through regular meetings, promoting horizontal and intersectoral integration. These committees are mirrored at higher governance levels, thereby promoting vertical integration through the spheres of governance.
---
How is the CLC policy intervention implemented?
The establishment and management of CLCs take place in two phases. The first is an establishment phase, which takes care of orienting stakeholders and community members, conducting a situation analysis and needs assessment, training both the CLC management committee and the sector experts, and forming the necessary cross-sectoral committees across levels of governance. It involves selecting a space where the CLC will be established and appointing and training a CLC coordinator from one of the government sector offices. With few exceptions, all CLCs in Ethiopia and Uganda have been established in existing buildings donated by local government, with sufficient land for demonstration sites and sports facilities. Renovation costs have been shared by government and NGOs. The operational second phase starts the process of delivering different services and putting systems in place for monitoring and managing the CLC.
---
Sustainability
To ensure the permanence of CLCs and the sustainability of their services, it is crucial for CLCs to be institutionalised. The East Africa region uses the Adult Learning and Education System Building Approach (ALESBA) to build sustainable ALE systems across five phases (Belete 2020). CLCs are at the nexus of service delivery, and provide an entry point for building a system of service delivery from the ground up. ALESBA's conceptual framework considers four elements, each of which has five system building blocks (ibid.). The elements and building blocks ensure attention to an enabling environment for implementing CLCs nationally: embedding them into national policies, strategies and qualifications frameworks, putting in place the necessary institutional arrangements across sectors of governance, and making space for non-state actors such as universities and NGOs to play a role.
The establishment of CLCs in Ethiopia and Uganda has exceeded the expectations of both service users and providers. As the practice continues to evolve, more services are added to the CLC spectrum. The provision of computer and other forms of digital training, including radio programmes, is currently in preparation. Governments have scaled up CLCs with their own funds in different districts, and included further roll-out in plans and budgets for the coming years. Advocacy around CLCs should continue to ensure sustainability and inclusion for permanent service delivery within these ALE systems. Ideally, the experience from and success of these projects should be rolled out to other parts of the continent.
---
Research initiative on CLCs in Africa
Within the broader interest in lifelong learning and the institute's thematic priority of Africa, UIL analysed case studies from Ethiopia, Kenya, Namibia, Rwanda and the United Republic of Tanzania a few years ago, and identified a diversity of community-based activities (Vieira do Nascimento and Valdes-Cotera 2018). In 2021, a new research initiative was launched to provide deeper insight into the potential role of community-based ALE and CLCs (Owusu-Boampong 2021). A short survey comprising 12 questions was prepared to obtain comparable data on the status of CLCs in African countries. It was sent out to 35 African UNESCO Member States, using the channel established by UIL for requesting national reports from countries for the Fifth Global Report on Adult Learning and Education (GRALE 5; UIL 2022). The 24 responses received by UIL provide substantial information on related legal frameworks, policies, strategies and guiding documents to support the operation of CLCs in African countries, and on a variety of forms at different stages of institutionalisation in about 15 countries. Programmes in CLCs mainly include literacy, vocational and income-generation activities. Target groups are adults, women and youth, with an emphasis on disadvantaged groups and hard-to-reach communities (Owusu-Boampong 2021).
In terms of outcomes of CLC activities, the following were reported: creating a reading culture in the community; empowering communities economically; complementing formal education; providing recreational facilities; participation in community development; creating awareness in health and hygiene; promoting girls' education; facilitating skills development for citizenship and entrepreneurship; and enabling inter-generational learning (ibid.). Respondents considered the integration of additional services (such as basic health services) in CLCs as having the effect of increasing the effectiveness and sustainability of CLCs, embodying an infrastructure that provides access to communities which often feel deprived or left behind (ibid.). Further findings in the questionnaire include:
• Nine out of 24 participating countries reported that CLCs are specifically mentioned in their national ALE or NFE policies.
• Half of the participating countries identified their Ministry of Education as the main entity or stakeholder responsible for coordinating CLCs in their country, followed by NGOs and local communities.
• The majority of CLC programmes focus on the provision of basic education; only two countries mention offering equivalency programmes, while four countries provide certification.
• The provision of training and access to ICT was reported by 14 countries (ibid.).
Twenty countries reported a marked interest in receiving national capacity development in the form of CLC development guidelines, and expressed an interest in participating in peer exchange and sharing experiences among African countries (ibid.).
Also part of the research initiative at UIL was a review of documents available on community-based ALE in Africa (Hinzen 2021), and two of the recommendations emerging from that are the following:
• Governments in Africa should strengthen community-based ALE and CLC in their policies, legislation and financing from the education budget, and additionally within the inter-sectoral programmes of rural or community development, health and social services. CLCs should be integrated into international funding agendas. […]
• More robust data on CLC are needed through regular collection of statistics on national, regional and global level in respect to providers, programmes and participants that could be used to inform future planning and development. GEM [the Global Education Monitoring Report] and GRALE, together with the UNESCO Institute for Statistics should get involved (ibid., pp. 38, 39).
---
Monitoring progress and negotiating strategies for action
The research initiatives in both the Asia Pacific and the Africa regions contribute to generating grassroots data which feed into global monitoring and reporting efforts, reflecting the status quo and highlighting areas in particular need of action.
---
GEM 2021/2022: non-state actors in education
Among the most prominent global monitoring reports on education more generally is UNESCO's Global Education Monitoring Report (GEM), which began monitoring the seven SDG 4 Targets (4.1-4.7) and three "means of implementation" (4.a-4.c) in 2016 (UNESCO 2016a), a somewhat challenging endeavour (Benavot and Stepanek Lockhart 2016). The latest GEM report (UNESCO 2021) includes relevant information on CLCs. In particular, its chapter on "Technical, vocational, tertiary and adult education" features a dedicated section stating that "Community learning centres have proliferated in many countries":
Embracing an intersectoral approach to education beyond formal schooling, CLCs can act as learning, information dissemination and networking hubs. … The establishment and management of CLCs has been bolstered by local and national government authorities and non-state actors, such as non-governmental organizations (NGOs), which have supported community engagement with financial and human resources. … CLCs are characterized by broad-spectrum learning provision that adapts to local needs (ibid., pp. 259-260).
The report was well informed by a background paper on Non-state actors in non-formal youth and adult education (Hanemann 2021). Hanemann's findings relate to trends in the provision, financing and governance of ALE. Unsurprisingly, a key concern is "that many countries lack effective monitoring and evaluation systems including robust data on ALE. Moreover, this is also the case due to the multiplicity of non-state actors in this field" (Hanemann 2021, p. 15). She concludes with a set of recommendations. Two of them are relevant to the particular focus of our article on conditions conducive to ALE for lifelong learning and the potential role of CLCs and other community-based ALE institutions:
Governments should create an enabling legal, financial and political environment to make use of the full transformative and innovative potential of non-state actors in ALE. Non-state actors are usually well-placed to address situational, institutional, and dispositional barriers to engagement and persistence in learning, in particular those related to socio-cultural and gender issues. Such an enabling environment can best be achieved within collaborative efforts involving public and non-public partnerships (Hanemann 2021, p. 109; emphasis added).
Community participation and ownership must become a central goal of ALE programmes as it not only ensures the relevance and sustainability of programmes but also contributes to social cohesion. Therefore, the role of state and non-state ALE providers should increasingly become that of facilitator and assistance provider to help communities build strong local democratic governance of their programmes (ibid.; emphasis added).
---
GRALE 4: leave no one behind
Focusing on ALE more specifically are the Global Reports on Adult Learning and Education (GRALE) already mentioned in the introduction. In line with UIL's mandate initiated in 2009 (UIL 2010), they are prepared at three-year intervals. GRALE 4, monitoring the wider aspects of participation, equity and inclusion, includes a chapter where certain institutional, situational and dispositional barriers to wider participation are analysed, and CLCs are discussed as a potential institutional infrastructure.
While ... CLCs may look somewhat different across countries and regions, their success is a result of the active involvement by the community, whose members act as learners, instructors, and managers, and the community has ownership of the site (UIL 2019, p. 165).
The report throws the net wider than the CLCs of today, by looking at community-based ALE institutions from their historical beginnings in Europe and later in Latin America. The section concludes with an important statement in the context of the role of the state in supporting the conditions under which ALE institutions can operate well:
For ALE to function as an instrument for the promotion of democracy and in the struggle against inequality, two conditions have to be fulfilled: first, the state has to be ready to provide public funding to popular/liberal adult education institutions; and, second, while the state may set the overall purposes for funding popular/liberal adult institutions, they are given freedom in how to reach their goals (UIL 2019, p. 166).
---
Conference outcomes: commitments, declarations and frameworks
Progress in increasing participation in ALE (which of course includes the use of CLCs) is also reviewed at 12-year intervals during the UNESCO-led series of International Conferences on Adult Education (CONFINTEAs), resulting in outcome declarations and frameworks, such as the one adopted by participants of CONFINTEA VI, held in Belém, Brazil, in 2009. The relevance of the Belém Framework for Action (BFA) (UIL 2010) to CLCs is reflected in its call for "creating multi-purpose community learning spaces and centres" (ibid., p. 8).
The World Education Forum (WEF) is another conference series involving UNESCO, the World Bank and other international organisations operating in the field of education. Preceding the final ratification of the United Nations Education 2030 Agenda, the WEF session held in Incheon in the Republic of Korea in May 2015 resulted in a declaration "towards inclusive and equitable quality education and lifelong learning for all" (WEF 2016). Its relevance to CLCs is reflected in its "indicative strategy", which strives to
make learning spaces and environments for non-formal and adult learning and education widely available, including networks of community learning centres and spaces and provision for access to IT resources as essential elements of lifelong learning (ibid., p. 52; emphasis added).
In September 2015, the 2030 Agenda was ratified during the UN Sustainable Development Summit held in New York, USA (UN 2015; Boeren 2019). Another important document, adopted in November 2015 by the UNESCO General Conference in Paris, is the 2015 Recommendation on Adult Learning and Education (RALE) (UNESCO & UIL 2016). Its relevance to CLCs is its call for
creating or strengthening appropriate institutional structures, like community learning centres, for delivering adult learning and education and encouraging adults to use these as hubs for individual learning as well as community development (ibid., p. 11; emphasis added).
These declarations, frameworks and recommendations are collaborative outcome documents jointly drafted by UNESCO Member States, international organisations, public and private sectors, etc., ideally in consultation with civil society and other actors and stakeholders. Many of them call for improvement of data collection and the provision of appropriate monitoring and evaluation services.
In the run-up to CONFINTEA VII, to be held in Marrakech, Morocco, in June 2022, the Marrakech Framework for Action (MFA) is already being drafted by way of an online consultation. As we are writing this article, the current draft includes the following section:
Redesigning systems for ALE: We commit to strengthening ALE at the local level, as a strategic dimension for planning, design and implementation for learning programmes, and for supporting and (co-)funding training and learning initiatives such as community learning centres. We recognize the diversity of learning spaces, such as those in technical and vocational education and training (TVET) and higher education institutions, libraries, museums, workplaces, public spaces, art and cultural institutions, sport and recreation, peer groups, family and others. This means reinforcing the role of sub-national governments in promoting lifelong learning for all at the local level by, for example, pursuing learning city development, as well as fostering the involvement of local stakeholders, including learners (CONFINTEA VII online consultation, accessed 29 March 2022).
It is encouraging that members of the ALE community and beyond now have the opportunity to comment on all aspects of the MFA, which will be an important document for the next twelve years. Based on best practice examples, future policy recommendations could be enriched. The ways in which CLCs and other forms of community-based ALE institutions are taken up in different UNESCO Member States through governments and civil society actors (CCNGO 2021), towards frameworks for action (Noguchi et al. 2015), have so far been uneven; the forms they take, and the management arrangements, are diverse and mainly "work in progress".
We hope to contribute to this work by suggesting a few recommendations of our own in the next section.
---
Recommendations
Based on our discussion and deeper analysis of examples and experiences from countries in the Asian and African regions, in this section we put forward some recommendations of our own towards creating conditions conducive to having more and better ALE for lifelong learning, and integrating a role for CLCs and other community-based ALE institutions:
• Rethink and redesign educational governance and the education system to take full account of all sub-sectors from a lifelong learning perspective, and include all areas of formal, non-formal and informal education.
• Acknowledge ALE as a sub-system of the education system, in a similar way that formal schooling, vocational education and training (VET) and higher education (HE) are acknowledged. This will require including different entry points and communication messages in advocacy strategies.
• Set up a comprehensive ALE system, including all system-building blocks and elements, such as an enabling environment, management processes, institutional arrangements and technical processes.
• Acknowledge and promote the reality that ALE, like any education sub-system, needs a place and infrastructure where it can be delivered. CLCs and other community-based institutions can be developed as cornerstones of local infrastructure because they:
- offer a one-stop shop for a variety of ALE services to all target groups on the lifelong learning continuum and across sectors;
- can therefore improve access to ALE service delivery and increase participation, including those too often excluded;
- can reduce costs in ALE and other service delivery modalities for local governments, because the costs of operating the CLC can be shared across sectors; and
- provide opportunities for other stakeholders such as NGOs, universities and the private sector to use CLCs as a platform for engagement and cooperation.
• Make CLCs part of the institutional arrangements of the national ALE implementation structure across all spheres of governance. Furthermore, cross-sectoral coordination structures should be put in place, including community representation and participation.
• Strike a balance between community needs and interests and national policies and priorities: although the communities' needs and interests should be the main driver of the types of services to be delivered at CLCs, a balance should be struck with government priorities as elaborated in national and local development plans. This will ensure political and financial will and commitment towards CLCs.
• Collect all data and information on the provision and practice of CLCs. These data should be documented, recorded and used. They should feed into an ALE monitoring system as part of the overall education statistics to provide an evidence base for further advocacy for strengthening ALE and CLCs.
---
Conclusions
The research question we set out in the introduction of this article was: What conditions are conducive to having more and better ALE for lifelong learning - and which roles can CLCs and other community-based ALE institutions play? Throughout this article, we have considered information and related discourses at national and global levels. In each of the sections we contemplated findings from relevant literature, and more in-depth research on experiences and examples from the Asia-Pacific and African regions as well as insights related to interventions at the global level. We found that CLCs and community-based ALE are operating in many parts of the world and are diverse in many ways, including their nature and stages of institutionalisation and professionalisation, and the ways in which they are integrated, or not, in overall educational governance. What seems to be similar all over the world is that ALE, and thereby CLCs, remain marginal to all the other education sub-sectors. Therefore, CLCs are in dire need of better recognition, services and support. Unless support is substantially increased, especially in terms of financing the commitment to lifelong learning for all (Archer 2015; Duke et al. 2021), one can hardly imagine any of the necessary changes occurring. ALE is grossly underfunded in almost all national education budgets, and too often neglected in policies and related legislation. In too many countries, ALE is underrepresented in data collection, and the work of CLCs and other community-based learning institutions does not even find its way into systems of educational statistics. This makes monitoring efforts nationally, and subsequently globally, more than difficult. The provocative saying applies: "You measure what you treasure."
However, despite all the deficiencies which have been identified, many examples and experiences show that, with improved conditions for ALE and CLCs, these can come closer to what they aspire to reach. This includes an enabling environment with policies and legislation; an overarching structure for educational governance; an adult learning system with related institutions and professionalisation for all working in the sector; as well as organisational development for CLCs and other institutions. In addition, the current pandemic has shown the need to reflect more on new forms of blended learning and digital modes, and demonstrated their consequences for learners and their institutions.
CLCs have become a convenient catch-all for locally provided, and at least partly locally determined and managed, opportunities for institutionalised forms of ALE, and for informal meeting and learning in local community settings. In this article, we show that there are distinctive CLCs as well as other community-based ALE institutions which differ in many of their features, but also have much in common. What is crucial here is a better understanding of how policymaking can combine bottom-up and top-down approaches within decentralisation efforts. The question remains whether global- and national-level policies are working against local-level bottom-up practices of diverse local communities, or whether there might be possibilities in both directions, emerging from constructive evolution allied to applied learning and better practice.
GRALE 4 closes with the following statement:
This report has argued that a focus on participation in ALE is key to achieving the SDGs. This must mean reviewing policies in the light of the evidence on participation, and investing in sustainable provision that is accessible to learners from all backgrounds, as well as systematically supporting demand among those who have been the most excluded in the past. This will enable ALE to play its full, and wholly essential, part in achieving the SDGs (UIL 2019, p. 177; emphases added).
Current socio-political and ecological malaise requires more locally based community understanding. Many changes and developments in ALE and lifelong learning are needed at this time of interlocking critical social, political, technological, cultural and ecological change, with a climate crisis and the incipient "great extinction". The ambitious SDGs, with their goals and targets for change by 2030, seriously underestimate the centrality of ALE to coping with change, and the latent reach and wider scope of ALE within lifelong learning, as a test of what is and is not sustainable in the longer term.
---
Sonja Belete is an independent consultant with substantial experience in the fields of adult education, sustainable livelihoods, women's economic empowerment, good governance and systems building. She holds an MA degree in Adult Education and has authored several manuals, guidelines and articles. Her career with international NGOs such as DVV International, ActionAid and CARE, and with the UN, provided her with exposure to Southern Africa and the East/Horn of Africa, with occasional support missions to West and North Africa. She has managed large-scale national and regional programmes and was responsible for the pilot testing and up-scaling of CLCs in Ethiopia and Uganda as part of her work with DVV International.
---
Chris Duke
Professor, was founding CEO and Secretary-General of Place And Social Capital And Learning (PASCAL) and founding Secretary-General of the PASCAL International Member Association (PIMA). He is now Editor of the bimonthly PIMA Bulletin. Previously, he played leading roles in other international ALE civil society bodies, including the International Council for Adult Education (ICAE) and the Asia South Pacific Association for Basic and Adult Education (ASPBAE), and nationally in the UK and Australia. He qualified in Education, History and Sociology and was awarded an Hon DLitt by Keimyung University, Republic of Korea, for work in the field of lifelong learning. He has served as Professor and Head of Lifelong Learning in Australian, New Zealand and UK universities, and as President of the University of Western Sydney (UWS) Nepean. He has consulted extensively for UNESCO, the OECD, the EU and other international, national and local bodies.
Angela Owusu-Boampong is an education programme specialist at the UNESCO Institute for Lifelong Learning (UIL), holding a postgraduate degree in Adult Education from the Freie Universität, Berlin, Germany. She contributed to the coordination of the Sixth International Conference on Adult Education (CONFINTEA VI), held in 2009 in Belém, Brazil; the CONFINTEA VI Mid-term Review, held in 2017 in Suwon, Republic of Korea; as well as CONFINTEA VII, to be held in June 2022 in Marrakech, Morocco. This included organising related regional and global preparatory and follow-up processes. She previously contributed to developing the Global Report on Adult Learning and Education (GRALE), UNESCO's Recommendation on Adult Learning and Education (RALE 2015) and Curriculum globALE, a core curriculum for the training of adult educators. Her current research focuses on promoting inclusive learning environments for youth and adult learners.
Khau Huu Phuoc already had 22 years' experience in teacher training and curriculum design at Ho Chi Minh University of Education, Vietnam, before he transferred to the Regional Centre for Lifelong Learning (SEAMEO CELLL). As Manager of Research and Training at the Centre, he has conducted workshops and seminars aiming to promote understanding of lifelong learning and adult education, and sharing of related good practices for master trainers and teachers of non-formal education from the region. From 2016 to 2018, he coordinated the eleven Southeast Asian countries in the UNESCO Institute for Lifelong Learning (UIL)'s regional project "Towards a Lifelong Learning Agenda for Southeast Asia". Most recently, he developed the Curriculum for Managers of Adult Education Centres for international use by DVV International. He has contributed as a speaker to various events organised by the Asia South Pacific Association for Basic and Adult Education (ASPBAE), UNESCO Bangkok, DVV International, and has written articles for DVV International and the Friends of PASCAL International Member Association (PIMA).
| 68,669 | 1,672 |
e0e3056181831577ab837eb7d07e74dfd8fb6ff3 | William Foote Whyte, Street Corner Society and social organization. | 2,014 | [
"JournalArticle"
] | Social scientists have mostly taken it for granted that William Foote Whyte's sociological classic Street Corner Society (SCS, 1943) belongs to the Chicago school of sociology's research tradition or that it is a relatively independent study which cannot be placed in any specific research tradition. Social science research has usually overlooked the fact that William Foote Whyte was educated in social anthropology at Harvard University, and was mainly influenced by Conrad M. Arensberg and W. Lloyd Warner. What I want to show, based on archival research, is that SCS cannot easily be said either to belong to the Chicago school's urban sociology or to be an independent study in departmental and idea-historical terms. Instead, the work should be seen as part of A. R. Radcliffe-Brown's and W. Lloyd Warner's comparative research projects in social anthropology. | INTRODUCTION
Few ethnographic studies in American social science have been as highly praised as William Foote Whyte's Street Corner Society (SCS) (1943c). The book has been re-published in four editions (1943c, 1955, 1981, 1993b) and over 200,000 copies have been sold (Adler, Adler, & Johnson, 1992; Gans, 1997). John van Maanen (2011 [1988]) compares SCS with Bronislaw Malinowski's social anthropology classic Argonauts of the Western Pacific (1985 [1922]) and claims that "several generations of students in sociology have emulated Whyte's work by adopting his intimate, live-in, reportorial fieldwork style in a variety of community settings" (p. 39). 1 Rolf Lindner (1998) writes that even "one who does not share van Maanen's assessment cannot but see the two studies as monoliths in the research landscape of the time" (p. 278). To be sure, the Chicago school of sociology had published contemporary sociological classics, such as The Hobo (Anderson, 1961 [1923]), The Gang (Thrasher, 1963 [1927]), The Ghetto (Wirth, 1998 [1928]), and The Gold Coast and the Slum (Zorbaugh, 1976 [1929]). But none of these empirical field studies was as deeply anchored in the discipline of social anthropology, nor were any of them, to use Clifford Geertz's (1973) somewhat worn expression, equally "thick descriptions" of informal groups in the urban space. Whyte's unique ability to describe concrete everyday details in intersubjective relations created a new model for investigations based on participant observations in a modern urban environment.
SCS is a study about social interaction, networking, and everyday life among young Italian-American men in Boston's North End (Cornerville) during the latter part of the Great Depression. Part I of SCS describes the formation of local street gangs, the corner boys, and contrasts them with the college boys in terms of social organization and mobility. Part II outlines the social structure of politics and racketeering. Whyte spent three and a half years between 1936 and 1940 in the North End, which also gave him a unique opportunity to observe at close range how the social structure of the street corner gangs changed over time.
1. Typical examples of this research tradition are Anderson (2003 [1976]), Gans (1982 [1962]), Kornblum (1974), Liebow (2003 [1967]), Suttles (1968), and Vidich & Bensman (2000 [1958]).
OSCAR ANDERSSON graduated with a PhD in Social Anthropology from Lund University, Sweden, in 2003. His thesis is about the development of the Chicago School of Urban Sociology between 1892 and about 1935. In 2007, his thesis was published by Égalité in a completely new edition. He is currently working with the same publisher on the Swedish translations, and publications, of books that are regarded as part of the Chicago school heritage and beyond, with comprehensive and in-depth introductions that place each book in the context of the history of ideas. Previous titles include The Hobo (1923) and Street Corner Society (1943). Correspondence concerning this article should be sent to Oscar Andersson, Malmoe University, Faculty of Health and Society, Department of Health and Welfare Studies, Sweden; [email protected].
The study is still used as a valuable source of knowledge in concrete field studies of group processes, street gangs, organized crime, and political corruption (Homans, 1993 [1951]; Short & Strodtbeck, 1974 [1965]; Sherman, 1978). Today, SCS feels surprisingly topical even though the book first appeared 70 years ago. What seems to make the study timeless is that Whyte manages in a virtually unsurpassed way to describe people's social worlds in their particular daily contexts. Adler, Adler, & Johnson (1992, p. 3) argue in the same manner that SCS represents a foundational demonstration of participant observation methodology. With its detailed, insightful, and reflexive accounts, the methodological appendix, first published in the second edition, is still regarded as one of the premier statements of the genre. [ . . . ] SCS stands as an enduring work in the small groups literature, offering a rich analysis of the social structure and dynamics of "Cornerville" groups and their influence on individual members. SCS has thereby come to have something of a symbolic significance for generations of field researchers in complex societies. As Jennifer Platt (1983) examines in her historical outline of participant observation methodology, this took place mainly after Appendix A was published as an additional part in the 1955 second edition (p. 385). Lindner (1998) also points to the importance of the Appendix, and thinks that "With the new edition the reading of SCS is stood on its head: now the reader begins as a rule with the appendix, and then turns to the actual study" (p. 280). As a consequence, SCS has come to be considered as "the key exemplar in the textbooks of 'participant observation'" (Platt, 1983, p. 385); furthermore, numerous studies have used it as a symbol of how participant observations ought to be done. After SCS gained its iconic status, knowledge of the historical development of the study seems to have lost its importance or even been forgotten.
For this reason, it might not be so surprising that researchers, as the introductory quote from van Maanen indicates, have often taken it for granted that SCS belongs to the Chicago school's research tradition (Klein, 1971; Jermier, 1991; Schwartz, 1991; Boelen, 1992; Thornton, 1997) or that it is a relatively independent study that cannot be placed in any specific research tradition (Ciacci, 1968; Vidich, 1992). There are at least four reasons for this. First, although Whyte was awarded a prestigious grant from the Society of Fellows at Harvard University in fall 1936, he was not a doctoral student at the university. Instead, he defended his doctoral dissertation at the University of Chicago in 1943. Second, SCS is about classical "Chicago" topics, such as street gangs, organized crime, police corps, and political machinery. Third, Whyte conducted fieldwork in an urban environment, as many Chicago sociologists had previously done in the 1920s and 1930s. Finally, parts of SCS have, together with Chicago classics, been included in Chicago school of sociology compilation volumes; a typical example is The Social Fabric of the Metropolis: Contributions of the Chicago School of Urban Sociology (1971). Given these facts, it is quite easy to take for granted that Whyte was part of the Chicago school's research tradition or was an independent researcher in a historical period before anthropology at home had been established as a research field.
However, by using archival documents from Cornell University and other historical texts, I have traced SCS to a social-anthropological comparative tradition that was established by W. Lloyd Warner's Yankee City Series and continued later in Chicago with applied research in the Committee on Human Relations in Industry during the period 1944-1948. The committee, led by Warner, had the aim of bridging the distance between academia and the world of practical professions. Whyte's anthropological schooling at Harvard University led him to A. R. Radcliffe-Brown's structural-functional explanatory model in SCS. 2 In the first two sections, I will describe Whyte's family background and the circumstances behind his admission to the prestigious Society of Fellows at Harvard University. In the following two sections, I will first examine how Whyte came to study corner boys' and college boys' informal structure in Boston's North End. I will then analyze why Conrad M. Arensberg's and Eliot D. Chapple's observational method was such a decisive tool for Whyte in discovering the importance of informal structure and leadership among street corner gangs. In the next section, I will outline the reasons that led Whyte to defend SCS as a doctoral dissertation in the department of sociology at the University of Chicago and not at Harvard University. Thereafter, I will examine why Whyte's conclusion that Cornerville had its own informal social organization was such a ground-breaking discovery in social science. In the two final sections, I will situate Whyte's position in the historical research landscape of social anthropology and sociology in the 1920s, 1930s, and 1940s and then, with the help of a diagram, set out which researchers exercised the most important direct and indirect influences on his thinking.
---
BIOGRAPHICAL BACKGROUND
William Foote Whyte was born on June 27, 1914 in Springfield, Massachusetts. Whyte's grandparents had immigrated to the United States from England, the Netherlands, and Scotland. His parents were John Whyte (1887-1952) and Isabel van Sickle Whyte (1887-1975). John and Isabel met when they were in Germany, each on a university grant and working on their theses, doctoral and master's respectively, in German. After John received his doctorate and obtained employment as a lecturer at New York University, the family first settled in the Bronx district of New York City, but soon moved to the small town of Caldwell, New Jersey. Whyte, an only child, grew up in a Protestant middle-class family that appreciated literature, classical music, art, and education. During his earliest years, Whyte lived with different relatives, as his mother caught tuberculosis and his father was discharged by New York University when it abolished the German department at the outbreak of World War I. Due to his movements between families and since his parents brought him up to be a self-reliant boy, he often felt lonely and learned to keep his feelings to himself (Whyte, 1984, 1994, 1997; Gale Reference Team, 2002). 3
Even though John Whyte himself had received a strict upbringing in the Presbyterian Sunday school, he held that it was his son Whyte's choice whether or not to go to church. John said that he had gotten so much church instruction while growing up that it would last a lifetime. Only after Whyte had acquired his own family did he begin regular visits to the Presbyterian congregation. The reason was that he looked up to clergymen who preached for social equality and justice. On the other hand, he was not so fond of priests who wanted to save lost souls and gave superficial sermons about contemporary problems. Thus already from childhood, William Whyte learned to make independent decisions and to feel empathy for poor and vulnerable people (Whyte, 1984, 1994, 1997; Gale Reference Team, 2002).
2. Robert M. Emerson (2001 [1983]) certainly places Whyte and SCS in Harvard University's social anthropology tradition, but does so only briefly regarding the issue of method. Instead, Howard S. Becker supports the assumption that social science research has usually overlooked the fact that Whyte was educated in social anthropology at Harvard University (Becker, 1999, 2003, e-mail correspondence with Oscar Andersson dated July 11, 2009).
3. According to the anthropologist Michael H. Agar (1980, p. 3), the experience of alienation from one's own culture is common to many anthropologists. Whyte describes himself as a social anthropologist rather than a sociologist when he came from Boston to Chicago in 1940. The sense of estrangement also makes it easier for anthropologists to connect with other cultures. In his autobiography, significantly titled Participant Observer (1994), Whyte tells repeatedly of his emotional difficulties in feeling involved in the middle class's social activities and club life.
In the autumn of 1932, aged 18, William Whyte was accepted at Swarthmore College, located in a suburb of Philadelphia, as one of five students granted a scholarship. Whyte devoted most of his time to studying for examinations and writing articles as well as plays that were performed in the college area. Already at age 10, he had been encouraged by his parents to write short stories, and while at Bronxville High School he published an article every Tuesday and Friday in a local newspaper, The Bronxville Press (Whyte, 1970, 1994).
As a second-year student at Swarthmore, Whyte got the opportunity to spend a weekend at a settlement house in a Philadelphia slum district. This experience proved decisive for his future career. In a letter to his parents dated March 10, 1934, he wrote:

It is foolish to think of helping these people individually. There are so many thousands of them, and we are so few. But we can get to know the situation thoroughly. And that we must do. I think every man owes it to society to see how society lives. He has no right to form political, social, and economic judgement, unless he has seen things like this and let it sink in deeply. (Whyte, 1994, p. 39)

It was after this experience that Whyte realized that he wanted to write about the situation of poor people and daily life in the American urban slums. His interest in writing about corrupt politicians and slum poverty was also aroused by the investigative journalist and social debater Lincoln Steffens' 884-page autobiography, which he had devoured during the family's journeys in Germany in 1931. Three chapters in Steffens' book dealt with the extensive political corruption in Boston (Steffens, 1931; Whyte, 1994).
---
THE SOCIETY OF FELLOWS
In 1936, a senior researcher recommended that William Whyte be admitted to the prestigious Society of Fellows at Harvard University. The background to this recommendation was a 106-page essay, written and published the year before, with the title "Financing New York City." It drew great attention from politicians and civil servants in New York City, and his teacher in economics-which was also Whyte's main subject-thought that it was better written than many doctoral dissertations he had read. After considering several proposals for further studies and career opportunities, including an invitation to work for the city of New York, Whyte decided to accept the offer from the Society of Fellows. The associated grant meant that the researcher had the same salary as a full-time employed assistant professor at Harvard University, and could do research for three to four years on any topic, with free choice among the university's rich range of courses. Thanks to the basic freedom in selecting a research subject, it was not unusual for grantees to change subject after being accepted. The only academic restriction attached to the generous research grant was that the resultant writings could not be presented as a doctoral dissertation. This did not strike Whyte as a drawback when he was accepted; on the contrary, he regarded academic middle-class existence as all too limited and boring. The grant at Harvard gave Whyte, at barely 22, the opportunity to pursue what he had wanted to do since his time at Swarthmore College-an ethnographic slum study. Ever since he had read Steffens' notable autobiography and visited a Philadelphia slum district, he had dreamt of studying at close quarters and writing about a social world that was mostly unknown to the American middle class (Whyte, 1994, 1997). Whyte tells in his autobiography:

Like many other liberal middle-class Americans, my sympathies were with the poor and unemployed, but I felt somewhat hypocritical for not truly understanding their lives. In writing Street Corner Society, I was beginning to put the two parts of my life together. (Whyte, 1994, p. 325)

The anthropologist and sociologist Arthur J. Vidich (1992) argues with insight that "Methods of research cannot be separated from the life and education of the researcher" (p. 84). In order to really emphasize what an exotic social world Boston's North End was for the rest of the population, Whyte-like an anthropologist who visits an aboriginal people for the first time-begins the first paragraph of the unpublished "Outline for Exploration in Cornerville" in July 1940 as follows:

This is a study of interactions of people in a slum community as observed at close range through 3 1/2 years of field work. I call it an exploration, because when I came into Cornerville, its social organization was unknown to me as if I had been entering an African jungle. In this sense, the field work was a continual exploration-of social groupings, of patterns of action. 4

Whyte moved to Harvard University for the start of the autumn semester in 1936. He wanted to get there when the university was celebrating its 300th anniversary and was hailed in that regard as the oldest university in the United States. The Society of Fellows provided him with comfortable student quarters in Winthrop House on the university campus. The only formal requirement for a younger member was to attend dinners on Monday evenings.
These served as a ritual uniting young and old members of the Society, and Whyte took part in them even after, at the beginning of 1937, he moved in above the Martini family's Capri restaurant in the North End.
The Society was led, during the period 1933-1942, by the rather conservative biochemist Lawrence J. Henderson. It consisted of fellow students, such as Conrad M. Arensberg, Henry Guerlac, George C. Homans, John B. Howard, Harry T. Levin, James G. Miller and Arthur M. Schlesinger, Jr. Although the young Whyte did not always feel comfortable with Henderson's authoritarian style of leadership, he looked up to him for his scientific carefulness. Another mentor was the industrial psychologist Elton Mayo, a colleague of Henderson and Warner, who led the Hawthorne study (1927-1932) at the Western Electric Company in Cicero outside Chicago. Mayo is known chiefly for having conducted social-scientific field studies of industries, which Whyte was also later to do during his time at Chicago and Cornell universities. The classmate who would come to have by far the greatest importance for Whyte was Arensberg; he became his close friend and mentor during the study of the Italian-American slum district in the North End (Whyte, 1970, 1984, 1994, 1997). 5

---

THE NORTH END DISTRICT IN BOSTON

William F. Whyte began his field study in the North End during the fall of 1936. As previously mentioned, it was his visit to a Philadelphia slum and his reading of Steffens' work that gave him the idea of doing his own part in studying slum districts for the cause of progressive social change and political reform. During his first weeks at Harvard, he explored Boston's neighborhoods and sought advice at various social agencies. It was only after this initial survey that he settled on the North End as the place for his study. After a while, Whyte (1994, p. 62) decided to study the North End because this district best met his expectations of how a slum area looked:

I had developed a picture of rundown three- to five-story buildings crowded together. The dilapidated wooden-frame buildings of some other parts of the city did not look quite genuine to me. One other characteristic recommended the North End on a little more objective basis: It had more people per acre than any section of the city. If a slum was defined by its overcrowding, this was certainly it.
In an unpublished field study of the Nortons (street gang) from the autumn of 1938, Whyte gave a more neutral explanation with quantitative criteria for why he chose to study this city district:
According to figures from the Massachusetts Census of Unemployment in 1934, the population of the North End was 23,411. These people were housed on 35 acres of land. With a density of about 670 persons per acre in 1934, the North End was reported to be the most congested district in the United States. The neighboring West End has 342 people per acre, just slightly over half of the density of the North End, and other sections of Boston are much less thickly populated. 6

As Whyte writes in Appendix A in SCS, the models for his study were the community studies by the social anthropologists Robert S. Lynd and Helen Merrell Lynd, Middletown (1957 [1929]) and Middletown in Transition (1937), about Muncie in Indiana, and W. Lloyd Warner's not yet published five volumes in the Yankee City Series (1941-1962), about Newburyport in Massachusetts. 7 This is indicated not least by the fact that Whyte introduced his case study of the Nortons with a short social overview of the city district, which he called "A Sketch of the Community Surroundings." 8 Whyte also wrote the project plans titled "Plan for the North End Community Study" and "Plan for a Community Study of the North End," respectively, at the end of 1936 and the beginning of 1937, which show that he intended to investigate the inhabitants' and the district's history, cultural background, economics, leisure time activities, politics, educational system, religion, health, and social attitudes-in other words, to make a comprehensive community study. 9 What characterizes these extensive American social studies is that they try to completely describe and chart the complex cultural and social life of a town or city district. As models they have anthropology's holistic field studies of the more limited cultures and settlements of aboriginal populations. For while neither the Lynds nor Warner specifically made studies of slum areas, they did social-anthropological field work in modern American cities. This was exactly what Whyte wanted to do in the North End, although with a special focus on the slums. The community studies were also models for more limited social-scientific field studies of industries, mental hospitals, and medical hospitals after World War II (Whyte, 1967 [1964]; Kornblum, 1974; Becker, Geer, Hughes, & Strauss, 1977 [1961]; Burawoy, 1982 [1979]; Goffman, 1991 [1961]). 10 Fairly soon, however, Whyte came to revise his original plan and, instead, confined his study to the street gangs' social, criminal, and political organization in the city district. 11

He (1993b) came to this crucial understanding by connecting his field observations of the Nortons (corner boys) and Italian Community Club (college boys) and "their positions in the social structure" (p. 323). The theoretical framework was ". . . first proposed by Eliot D. Chapple and Conrad M. Arensberg (1940), [where] I concentrated attention on observing and roughly quantifying frequencies and duration of interactions among members of street-corner gangs and upon observing the initiation of changes in group activities" (Whyte, 1993b, p. 367). Whyte (1967 [1964]) writes further that, after about 18 months of field research, he "came to realize that group studies were to be the heart of my research" (p. 263). In the autumn of 1938, his case study of the Nortons arrived at the conclusion that even though the study "may apply to other groups of corner boys, I will specifically limit their application to this group which I have studied and not attempt to generalize for other groups." 12 Whyte's conclusion is probably a concession to Lawrence J. Henderson's strictly positivistic view of science, since he cited the study in his application for an extension of the research grant from the Society of Fellows at Harvard University. Long afterward, he (1993a) explained that the prevailing view of science at Harvard University in the 1930s emphasized "a commitment to 'pure science,' without any involvement in social action" (p. 291). As a result of Whyte's not having studied the North End inhabitants' working conditions, housing standards, family relations, industries, school system, or correspondence with native countries, it is thus not obvious that one should regard SCS as a community study (Vidich, Bensman, & Stein, 1964; Ciacci, 1968; Bell & Newby, 1972 [1971], pp. 17-18; Gans, 1982 [1962]; Whyte, 1992, 1994, 1997).

6. Whyte, William Foote. Papers. #/4087. Kheel Center for Labor-Management Documentation and Archives, Catherwood Library, Cornell University, 14087 Box 2, Folder 32.
7. Whyte could also have taken part in the research project Yankee City Series during his stay in Chicago in the early 1940s, but he was persuaded by his supervisor Warner to finish his doctoral degree first (Whyte, 1994).
8. Whyte, William Foote. Papers. #/4087. Kheel Center for Labor-Management Documentation and Archives, Catherwood Library, Cornell University, 14087 Box 2, Folder 32.
9. Whyte, William Foote. Papers. #/4087. Kheel Center for Labor-Management Documentation and Archives, Catherwood Library, Cornell University, 4857 Box 2A, Folder 10.
---
SOCIAL ANTHROPOLOGY AND ARENSBERG'S AND CHAPPLE'S OBSERVATIONAL METHOD
William Whyte's introduction to social anthropology occurred through a course-"The Organization of the Modern Communities"-which was taught by Conrad M. Arensberg and Eliot D. Chapple. He (1994, p. 63) found the course rewarding, but more important was that he thereby got to know Arensberg: 13

He [Arensberg] took a personal interest as my slum study developed, and we had many long talks on research methods and social theory. He also volunteered to read my early notes and encouraged me with both compliments and helpful criticisms.
Archival documents from Cornell University show that Arensberg read and commented on SCS from its idea stage until the book was published in December 1943. The recurrent discussions and correspondence which Whyte had with Arensberg about the study's disposition, field-work techniques, and group processes contributed greatly to problematizing and systematizing Whyte's observations of the street gangs. 14 Whyte (1993a) tells that "While I was in the field, 1936-1940, I thought of myself as a student of social anthropology. I had read widely in that field, under the guidance of Conrad M. Arensberg" (p. 288). During the period 1932-1934, under Warner's supervision, Arensberg had done field work in County Clare in Ireland. Arensberg's field study resulted in the books The Irish Countryman (1968 [1937]) and, together with Solon T. Kimball, Family and Community in Ireland (1968 [1940]). It belonged to a still, in some ways, unequaled social-scientific research project that was led by Warner and based on the cross-cultural comparative sociology of Emile Durkheim and A. R. Radcliffe-Brown. Radcliffe-Brown's and Warner's comparative social-anthropological field studies of different cultures and social types were path-breaking in several respects, and had the explicit aim of generating universal sociological theories about man as a cultural and social being (Warner, 1941a, 1941b, 1959, 1962 [1953], 1968 [1940]; Radcliffe-Brown, 1952, 1976 [1958]; Whyte, 1991, 1994, 1997; Stocking, 1999 [1995]).

10. Almost 20 years later, Herbert J. Gans (1982 [1962]) would make a community study of the adjacent West End.
11. Whyte (1993b) writes in Appendix A: "As I read over these various research outlines, it seems to me that the most impressive thing about them is their remoteness from the actual study I carried on" (p. 285).
12. Whyte, William Foote. Papers. #/4087. Kheel Center for Labor-Management Documentation and Archives, Catherwood Library, Cornell University, 14087 Box 2, Folder 32.
13. Arensberg was formally admitted by the Society of Fellows as an anthropologist during the period 1934-1938, and Whyte as a sociologist during 1936-1940.
Arensberg wrote, together with Chapple, a social-anthropological method book about field observations which is almost forgotten today-Measuring Human Relations: An Introduction to the Study of the Interaction of Individuals (1940)-and which passed Henderson's critical inspection only after five revisions. Their interactionist method would be used by Whyte throughout SCS and during the rest of his academic career. It emphasized that the researcher, through systematic field observations of a specific group, such as the Nortons, can objectively "measure" what underlies the group members' statements, thoughts, feelings, and actions. The systematic method can also give the researcher reliable knowledge about the group's internal organization and ranking, for instance who a street gang's leader and lieutenants are. Above all, emphasis is placed on the quite decisive difference between pair interactions of two people and group interactions of three or more people (Chapple & Arensberg, 1940; Whyte, 1941, 1955, 1967 [1964], 1993a). Whyte (1994, pp. 63-64) develops these ideas in his autobiography:
In determining patterns on informal leadership, the observation of pair events provided inadequate data. At the extremes, one could distinguish between an order and a request, but between those extremes it was difficult to determine objectively who was influencing whom. In contrast, the observation of set events provided infallible evidence of patterns of influence. The leader was not always the one to propose an activity, although he often did. In a group, where a stable informal structure has evolved, a follower may often propose an activity, but we do not observe that activity taking place unless the leader expresses agreement or makes some move to start the activity. [ . . . ] This proposition on the structure of set events seems ridiculously simple, yet I have never known it to fail in field observations. It gave me the theory and methodology I needed to discover the informal structure of street corner gangs in Boston's North End.
According to Whyte, only in observations of group interactions was it possible to learn who was the street gang's informal leader. This could be shown, for example, by the fact that two or three groups merged into a larger unit when the leader arrived. When the leader said what he thought the gang should do, the others followed. Certainly others in the group could make suggestions of what they should do, but these usually dried up if the leader disagreed. If there were more than one potential gang leader, usually the lieutenants, this was shown by the members splitting up and following their respective leaders. Whyte maintained that the internal ranking in the group determines all types of social interactions. An example was that the group's leader basically never borrowed money from persons lower in the group hierarchy, but turned primarily to leaders in other gangs, and secondarily to the lieutenants. This was a recurrent pattern that Whyte could find among the five street gangs he observed. Another illustration of how the group members' ranking was connected with group interactions is the often-mentioned bowling contest in the first chapter of SCS. The results of the contest, which was held at the end of April 1938, reflected-with two exceptions-the group's internal ranking. 15 According to Whyte (1941, p. 664), the method requires

. . . precise and detailed observation of spatial positions and of the origination of action in pair and set events between members of informal groups. Such observations provide data by means of which one may chart structures-a system of mutual obligations growing out of the interactions of the members over a long period of time.
William Whyte is known mainly as an unusually acute participant observer with a sensitivity to small, subtle everyday details (Adler, Adler, & Johnson, 1992; van Maanen, 2011 [1988]). But in fact the observational method that he acquired from Arensberg and Chapple advocates quantitative behavioral observations of group processes. Whyte (1991) maintains, perhaps a bit unexpectedly, that "although SCS contains very few numbers, major parts of the book are based on quantification, the measurement (albeit imprecise) of observed and reported behavior" (p. 237). The behavioral scientist Chris Argyris (1952), who was a doctoral student under Whyte at Cornell University during the first half of the 1950s, concluded: "In other words, Chapple and Arensberg believe, and Whyte agrees, that all feelings of individuals can be inferred from changes in their basic interaction pattern" (p. 45). Thus, Whyte made use of both participant observations and behavioral observations of group interactions. The reason why he emphasizes the use of measurable observations is probably that behavioral observations, in the United States during the 1920s-1940s, were usually regarded as scientifically objective and reliable. In this respect, Arensberg, Chapple, and Whyte were palpably influenced by Henderson's positivistic view of science. Participant observation was supposed to be more colored by the researcher's subjective interpretations. Hence Whyte drew a clear distinction between observations and interpretations of observations (Chapple & Arensberg, 1940; Whyte, 1941, 1953 [1951], 1967 [1964], 1970, 1982, 1991, 1993a, 1994, 1997; Argyris, 1952). Platt (1998 [1996], p. 251) writes:
There has been little or no commentary within sociology on its [SCS] connections with obviously behaviouristic and positivistic orientations to observation and to study small groups, despite some clues given in the text.
Whyte was to have great use of Arensberg's and Chapple's method for observations of the social, criminal, and political structure in the North End. When he (1993b, p. 362) asked the Nortons who their leader was, they answered that there was no formal or informal leader, and that all the members had equally much to say about decisions.
It was only after Whyte had become accepted by the street gangs in the North End that he could make systematic observations of everyday interactions between the groups' members at the street level, and thereby reach conclusions that often contradicted the group members' own notions about their internal ranking. Contrary to what several members of the Nortons said, he detected a very clear informal hierarchy in the group-even though the group's composition changed during the three and a half years of his field work. Indeed, Whyte argues in SCS that the Nortons no longer existed in the early 1940s. One of Whyte's chief achievements in SCS was to make visible in detail the street gangs' unconscious everyday interactions and mutual obligations within and between groups. At the same time, it emerges in SCS that Whyte's main informant Doc, probably through his discussions with Whyte, increased his awareness of the Nortons' informal organization and interactions with other groups in the local community. To borrow a pair of concepts from the American sociologist Robert Merton, he thus arrived at a latent explanation that went against the street gang's manifest narratives (Whyte, 1941, 1993a, 1994, 1997; Homans, 1993 [1951], pp. 156-189; Merton, 1996, p. 89).

15. Whyte, William Foote. Papers. #/4087. Kheel Center for Labor-Management Documentation and Archives, Catherwood Library, Cornell University, 14087 Box 2, Folder 32.
---
THE SOCIOLOGY DEPARTMENT AT THE UNIVERSITY OF CHICAGO
As we have seen, an explicit condition of the Society of Fellows was that SCS could not be submitted as a doctoral dissertation at Harvard University. As Whyte (1994) noted, "the junior fellowship was supposed to carry such prestige that it would not be necessary to get a PhD" (p. 108). When he realized, after his intensive field work in the North End, that he would nevertheless need a doctoral degree, but that he could not obtain a doctoral position at Harvard, he was drawn to the possibility of going to Chicago. He (1994, p. 108) wrote about why he chose to go there:
The sociology department at the University of Chicago had an outstanding reputation, but that was not what attracted me. On the advice of Conrad Arensberg at Harvard, I chose Chicago so I could study with W. Lloyd Warner, who had left Harvard in 1935 after completing the fieldwork for several books that came to be known as the Yankee City series.
Since Warner was the professor in both anthropology and sociology-the only chair in both subjects after the department was divided in 1929-Whyte did not at first need to choose a main subject. But he wanted to finish his doctoral studies at Chicago as quickly as possible, and decided that it would take longer to study anthropological courses, such as archaeology and physical anthropology, than sociological ones, such as family studies and criminology. Before settling on South Dorchester Avenue in Chicago during the autumn of 1940, Whyte had been influenced at Harvard by leading researchers who were not primarily based in sociology, although Henderson lectured for the Society about Vilfredo Pareto's economically oriented sociology. At Chicago, the only sociologist who really influenced him was Everett C. Hughes (Whyte, 1970, 1991, 1994, 1997).
---
THE ORGANIZED SLUM
When William F. Whyte researched and lectured at Chicago during the period 1940-1948, 16 a tense intradisciplinary antagonism existed there between Everett C. Hughes and W. Lloyd Warner, on the one hand, and Herbert Blumer and Louis Wirth, on the other. The background to this antagonism was that Blumer/Wirth did not think that Hughes/Warner maintained a high enough scientific level in their empirical studies, while the latter thought that Blumer/Wirth only talked without conducting any empirical research of their own (Abbott, 1999). Whyte eventually found himself in the Hughes/Warner camp. As a result of this conflict, and of Warner not being able to attend his dissertation defense, Whyte felt uncertain how it would turn out. At the defense he received hard criticism from Wirth for not having defined the slum as disorganized, and for not having referred to previous slum studies in sociology such as Wirth's own sociological classic-The Ghetto (1998 [1928]). But Whyte argued that SCS would be published without bothersome footnotes containing references, and without an obligatory introduction surveying the literature of earlier slum studies. He also gradually perceived that SCS would have been a weaker and more biased study if he had read earlier sociological slum studies before beginning his field work, since they would have given him a distorted picture of the slum as disorganized from the viewpoint of middle-class values (Whyte, 1967 [1964], 1970, 1982, 1984, 1991, 1993a, 1994, 1997). Whyte (1967 [1964], p. 258) described how he avoided this pitfall:
The social anthropologists, and particularly Conrad M. Arensberg, taught me that one should approach an unfamiliar community such as Cornerville as if studying another society altogether. This meant withholding moral judgements and concentrating on observing and recording what went on in the community and how the people themselves explained events.
Whyte's social-anthropological schooling from Harvard made him a somewhat odd bird for certain sociologists at Chicago. Furthermore, it shows that an alternative pre-understanding can enable the researcher to view the studied phenomenon in an at least partly new light. After several fruitless attempts by Wirth to get Whyte to define the slum as disorganized, Hughes, who also sat on the degree committee, intervened. He said that the department would approve SCS as a doctoral dissertation on the condition that Whyte wrote a survey of the literature of earlier slum studies. The survey would thereafter be bound together with the rest of the text and placed in the University of Chicago's library. Once Whyte had published two articles titled "Social Organization in the Slums" (1943b) and "Instruction and Research: A Challenge to Political Scientists" (1943a), Hughes persuaded the sociology department that these did not need to be bound with SCS (Whyte, 1984, 1991, 1993a, 1993b, 1994, 1997).

16. The exception was the academic year 1942-1943, when Whyte did research at the University of Oklahoma (Whyte, 1984, p. 15; Gale Reference Team, 2002).
The concept of disorganization was fundamental to the Chicago school's urban sociology, for its view of the group's adaptation to city life and the individual's role in the group. This concept had first been introduced by William I. Thomas and Florian Znaniecki in the book that became their milestone, The Polish Peasant (1958 [1918-1920]). It subsequently became an accepted perspective on migrants' process of integration into urban social life in the Chicago school's studies during the 1920s, 1930s, and 1940s. When Whyte argued in his study that the slum was organized for the people who resided and lived there, he touched a sore spot in the Chicago school's urban sociology, which had held for decades that the slum lacked organization. Whyte made an important empirical discovery when, with great insight and precision, he described the internal social organizations of the Nortons and Italian Community Club, whereas the Chicago school had unreflectively presupposed such groups' lack of internal social structure. According to Whyte (1993b), the North End's problem was not "lack of organization but failure of its own social organization to mesh with the structure of the society around it" (p. 273). 17

17. See also Edwin H. Sutherland's (1944) review of SCS in American Journal of Sociology and R. Lincoln Keiser (1979 [1969]) for a similar critique of how certain sociologists regarded Afro-American street gangs in economically impoverished areas during the 1960s.
The Chicago school came to use the concept of disorganization mainly in two ways. The first, based on Thomas and Znaniecki's definition, was an explanation for how the Polish peasants, and other groups in the transnational migration from the European countryside to the metropolis of Chicago, went through three phases of integration: organized, disorganized, and finally reorganized. The second way of using the concept was a later modification of the first. Chicago sociologists such as Roderick D. McKenzie and Harvey W. Zorbaugh described the groups who lived in the slum as permanently disorganized. The difference between the two viewpoints was that Thomas and Znaniecki emphasized that the great majority of the Polish peasants would gradually adapt to their new homeland, while McKenzie and Zorbaugh-who were strongly influenced by the human-ecological urban theory of Robert E. Park and Ernest W. Burgess-considered the slum as disorganized regardless of which group was involved or how long it had lived in Chicago. Thomas was also to write about young female prostitutes in The Unadjusted Girl (1969 [1923]), where he alternated between the two viewpoints. McKenzie and Zorbaugh proceeded more faithfully from Burgess' division of the city into five concentric zones, and their manner of using the concept of disorganization became the accepted one in this school (Whyte, 1943b, 1967 [1964]; Ciacci, 1968; Andersson, 2007). 18 It was only Thomas and Znaniecki who made a full transnational migration study in Chicago. The other studies mostly took their starting point in the slum after the migrant had arrived in Chicago. The sociologist Michael Burawoy (2000) maintains, somewhat simplistically, that "the Chicago School shrank this global ethnography into local ethnography, and from there it disappeared into the interiors of organizations" (p. 33).
An excellent example of a local monograph during the school's later period was the quantitative study Mental Disorders in Urban Areas (1965 [1939]) by Robert E. L. Faris and H. Warren Dunham, which concluded that the highest concentration of schizophrenia occurred in disorganized slum areas. Thus, the use of the concept in the Chicago school had gone from explaining migrants' transnational transition, from countryside to big city, to solely defining the slum and its inhabitants as disorganized.
In his article "Social Organization in the Slums" (1943b), Whyte criticized the Chicago researchers McKenzie, Zorbaugh, Thomas, and Znaniecki for being too orthodox when they see the slum as disorganized and do not realize that there can be other agents of socialization than the family, such as the street gang and organized crime. In contrast, Whyte thinks that other Chicago sociologists-like John Landesco, Clifford R. Shaw, and Frederic M. Thrasher-have found in their research that the slum is organized for its inhabitants. Whyte emphasized in his article that it is a matter of an outsider versus an insider perspective. Some Chicago researchers have an outsider perspective nourished by American middle-class values, while others show greater knowledge about the slum population's social worlds. Whyte (1967Whyte ( [1964]], p. 257) developed this idea in the mid-1960s:
The middle-class normative view gives us part of the explanation for the long neglect of social organization in the slums, but it is hardly the whole story. Some sociologists saw slums in this way because they were always in the position of outsiders.
Rather surprisingly, Whyte in the above-cited article (1943b) argued that the Chicago sociologists had different views of disorganization even though all of the sociologists he named, besides Thomas and Znaniecki, had shared Park and Burgess as supervisors and mentors. Moreover, he does not mention that Chicago researchers during the 1920s, 1930s, and 1940s used the concept of disorganization in an at least partly different way than the original one of Thomas and Znaniecki.

18. Whyte, William Foote. Papers. #/4087. Kheel Center for Labor-Management Documentation and Archives, Catherwood Library, Cornell University, 14087 Box 2, Folder 8.
It is also worth noting that Whyte's article did not examine the Chicago monograph which perhaps most closely resembles his own: Nels Anderson's The Hobo (1961 [1923]). Anderson too, in his intensive field study of Chicago's homeless men in the natural area Hobohemia, concluded that the slum was organized. Both Anderson and Whyte made use of participant observation to illuminate groups' complex social worlds. Although Anderson does not give a concrete description of a few groups of homeless men corresponding to what Whyte gives for the street gangs in the North End, they both find that an insider perspective has a decisive importance for understanding the slum residents' multifaceted social worlds.
When Whyte compared his own study with Thrasher's (1963 [1927]), which deals with child and teenage gangs in Chicago's slum districts, there emerge some of the most important differences between his approach and that of the Chicago school. While Whyte (1941, pp. 648-649) gave a dense description of five street gangs, the Chicago school strove-with natural science as a model-to draw generally valid conclusions about groups, institutions, life styles, and city districts:

It [SCS] differs from Thrasher's gang studies in several respects. He was dealing with young boys, few of them beyond their early teens. While my subjects called themselves corner boys, they were all grown men, most of them in their twenties, and some in their thirties. He studied the gang from the standpoint of juvenile delinquency and crime. While some of the men I observed were engaged in illegal activities, I was not interested in crime as such; instead, I was interested in studying the nature of clique behavior, regardless of whether or not the clique was connected with criminal activity. While Thrasher gathered extensive material upon 1,313 gangs, I made an intensive and detailed study of 5 gangs on the basis of personal observation, intimate acquaintance, and participation in their activities for an extended period of time.
What chiefly emerges in the quotation is that, whereas Whyte's main aim was to describe precisely the daily interactions in and between five street gangs and the surrounding local community, Thrasher's general objective was to make a social survey of all the street gangs in Chicago, even though the study does not show how well he succeeded with his grand ambition. Of course, the Chicago school's research projects during the 1920s, 1930s, and 1940s differed as regards their efforts to use sociological categories. For instance, Anderson's The Hobo is not as permeated by the urban sociological perspective of Park and Burgess as is Zorbaugh's The Gold Coast and the Slum.
Neither the Chicago researchers nor Whyte seem to have been fully aware that they used the concept of disorganization in different ways. Thus, they mixed together an ethnic group's transnational migration process and internally organized urban communities, on the one hand, with abiding social problems, such as homelessness, criminality, prostitution, schizophrenia, suicide, and youth gangs, in slum areas, on the other hand. A complementary explanation could be that Chicago concepts were sometimes betrayed by Chicago observations (Hannerz, 1980, p. 40). Whyte adopted a more relativistic cultural attitude toward people who lived in the slum, and held that the middle class's formal organizations and societies should not be considered "better" than the street gangs' and organized crime's informal organizations and networks inside and outside the slums. While the Chicago school, aided by the research results from its monographs, tried to find general patterns in migrant groups' adaptation to the new living conditions in the metropolis, Whyte's purpose was to expose in detail the street gangs' social, criminal, and political organization. These constituted alternative career paths for the slum inhabitants, and were connected not least with the formal and informal political structure on both the municipal and national levels.
If we can get to know these people intimately and understand the relations between little guy and little guy, big shot and little guy, and big shot and big shot, then we know how Cornerville society is organized. On the basis of that knowledge it becomes possible to explain people's loyalties and the significance of political and racket activities. (Whyte, 1993b, p. xx)

That Whyte employed a structural-functional explanatory model for how the different parts of the North End cohere in a larger unity is not a coincidence, as I will clarify later. At the same time, it is essential to notice that when Whyte made his study, the earlier optimism that characterized most of the Chicago school's studies had given way to a more pessimistic outlook on the future as a result of the Depression that pervaded American society in the 1930s.
---
SOCIAL ANTHROPOLOGY AND SOCIOLOGY AT THE UNIVERSITIES OF CHICAGO AND HARVARD
It was a historical fluke that William Whyte became a grantee at the Society of Fellows two years after Arensberg gained admittance. Due to the social-anthropological schooling that Whyte acquired at Harvard University, he was able throughout his 86-year life to argue against researchers who wanted to place him in the Chicago school's realm of thought. Besides criticizing certain Chicago sociologists' description of the slum as socially disorganized, he came to be included, through his mentors Arensberg and Warner, in Radcliffe-Brown's research ambitions of a worldwide comparative sociology. 19 In 1944, Radcliffe-Brown (1976 [1958], p. 100) wrote about this grand research project:

Ethnographical field studies are generally confined to the pre-literate peoples. In the last ten years, field studies by social anthropologists have been carried out on a town in Massachusetts, a town in Mississippi, a French Canadian community, County Clare in Ireland, villages in Japan and China. Such studies of communities in "civilized" countries, carried out by trained investigators, will play an increasingly large part in the social anthropology of the future.
While it is highly probable that Radcliffe-Brown refers to Newburyport rather than to Boston as the town in Massachusetts where social anthropologists had conducted extensive field studies, my argument is that SCS can also be placed in this tradition. Warner (1941a) also wrote, in line with Radcliffe-Brown, in the early 1940s that social anthropology's field studies of modern society "must in time be fitted into a larger framework of all societies; they must become a part of a general comparative sociology" (p. 786). Warner, who had also been taught by Robert H. Lowie at Berkeley, got to know Radcliffe-Brown in connection with his field work in Australia on the Murngin people during 1926-1929 (Warner, 1964 [1937]). When Whyte came to Chicago in 1940, the departments of anthropology and sociology had conducted cross-cultural comparative sociology with social-anthropological field methods since at least the end of the 1920s. It can certainly be argued that Thomas and Znaniecki's as well as Anderson's sociological field studies, The Polish Peasant and The Hobo, respectively, constitute the real origin of this research tradition-but it is perhaps more correct to point out Robert Redfield's (1971 [1935]) study of Mexico, together with the corresponding study of Sicily, as the first two anthropological studies in the tradition. At the same time as these two studies of Mexico and Sicily, respectively, belonged to an incipient social-anthropological tradition with strong sociological influences, they were remarkable exceptions, since most of the anthropological field studies in the 1920s and 1930s were conducted within the borders of the United States (Warner, 1968 [1940]; Eggan, 1971; Peace, 2004, p. 68).
When the departments of sociology and anthropology at the University of Chicago were divided in 1929, the previously close collaboration between the two disciplines took a partly new form and orientation. Before the division, the idea was that the sociologists would take care of anthropology at home, while the anthropologists would investigate immigrants' cultural background in their native countries. Fay-Cooper Cole, the head of the anthropology department, wrote in a grant application in 1928:
It is our desire to continue such studies but we believe that there is also a field of immediate practical value in which ethnological technique can be of special service -that is in the study of our alien peoples. Most of our attempts to absorb or Americanize these alien groups have been carried on without adequate knowledge of their backgrounds, of their social, economic, or mental life in the homelands. It is our hope to prepare high grade students for these background studies, and to make their results available to all social workers. We have recently made such a study of one district in Mexico, as a contribution to the study of the Mexican in Chicago. We have a similar study in prospect of the Sicilian. However these investigations are of such importance that we should have ten investigators at work where we now have one. 20 The researchers in this comparative project would learn how the native countries' cultures were related to the immigrant groups' capacity for adaptation in Chicago and other American cities. For example, Redfield made a field study in Chicago of how the Mexican immigrants had managed to adapt from rural to urban life, before he began his field work on Tepoztlan. 21 The Chicago researchers thought that adaptive dilemmas of ethnic migrant groups could be mitigated if the city's welfare organizations and facilities had better and deeper background knowledge about the migrant's cultural patterns. The obvious social utility of such a crosscultural research project would be that the United States and Chicago could make social efforts specifically adjusted to each newly arrived ethnic group. The field studies of Mexicans and Sicilians, referred to by Cole in the quotation above, were the first two anthropological studies in the almost symbiotic collaborative project between anthropology and sociology. It was crowned in the mid-1940s with Horace R. Cayton's and St. Clair Drake's unsurpassed work Black Metropolis (1993 [1945]). Cayton and Drake, whose supervisor had been Warner, dedicated their book to Park.
A historically decisive watershed for the anthropology department at Chicago in the early 1930s was the employment of Radcliffe-Brown. As George W. Stocking (1976) noted in regard to the department's development, "the more important functionalist influence, however, was that of Radcliffe-Brown, who came to Chicago in the fall of 1931, fresh from his comparative synthesis of the types of Australian social organization" (p. 26). Radcliffe-Brown was employed as a professor of anthropology at the University of Chicago during 1931-1937 and succeeded Edward Sapir, who in the autumn of 1931 took over an advantageous professorship in the new anthropology department at Yale University (Darnell, 1986, p. 167; Stocking, 1999 [1995]). Apart from the importance of conducting intensive field studies, Radcliffe-Brown and Sapir had different views in several respects about the subject area and orientation of anthropology. While Sapir was a linguist, and schooled in historicism by the father of American anthropology, Franz Boas, it was Durkheim's comparative sociology that inspired Radcliffe-Brown's theories. Boas and his students were mainly occupied with historical and contemporary documentation of disappearing Indian cultures in the United States (Boas, 1982 [1940]; Stocking, 1999 [1995], pp. 298-366). Radcliffe-Brown (1976 [1958]) defined social anthropology as a natural science whose primary task "lies in actual (experimental) observation of existing social systems" (p. 102). In other words, he was more eager to document the present than salvage the past.

20. University of Chicago, Special Collections Research Center of Joseph Regenstein Library, Presidents' Papers 1925-1945, Box 108, Folder 9.
21. University of Chicago, Special Collections Research Center of Joseph Regenstein Library, Robert Redfield Papers 1925-1958, Box 59, Folder 2.
When Warner was employed in 1935 by both the anthropological and sociological departments at Chicago, Radcliffe-Brown's cross-cultural comparative sociology could be implemented in various research projects. But Warner had already begun, in collaboration with Mayo, the Yankee City Series at Harvard University in the early 1930s. Redfield was also to have great importance for a deeper cooperation between anthropology and sociology at Chicago. Strongly influenced by his father-in-law Robert E. Park's sociology, he had emphasized in his doctoral dissertation that anthropologists should devote themselves less to pre-Columbian archaeology and folklore, and focus more on contemporary comparative scientific cultural studies. Moreover, Park described Radcliffe-Brown as a sociologist who primarily happened to be interested in aboriginal peoples (Stocking, 1979, p. 21). In addition to Warner and Redfield, Hughes also contributed in a crucial way to deepening and enlivening the cooperation between the anthropology and sociology departments even after their division in 1929. It is in this research landscape that I want to place Whyte's SCS.
Paradoxically, the employment of Radcliffe-Brown in 1931 and Warner in 1935 meant that the anthropology department became more sociologically oriented than before the division in 1929. The reason was partly that Radcliffe-Brown replaced Sapir, and partly that Redfield and Warner had ever more to say while Cole had less influence after Sapir left Chicago. Furthermore, Redfield and Warner mostly shared Radcliffe-Brown's view of the subject's future development. Already from the start, as Park (1961 [1923], p. xxvi) wrote in the "Editor's Preface" to The Hobo, sociology in Chicago had the general aim of giving not as much emphasis to

. . . the particular and local as the generic and universal aspects of the city and its life, and so make these studies not merely a contribution to our information but to our permanent scientific knowledge of the city as a communal type.
Against the background of this reasoning, it may be worth observing that social anthropologists and sociologists in Chicago, after the arrival of Radcliffe-Brown and Warner, developed what the Chicago school had already initiated about the diversity of urban life, although with the entire world as a field of ethnographic work. While the Chicago school's dominance in American sociology began to taper off in the mid-1930s, since Park left the department in 1934 and the sociology departments at Columbia and Harvard universities were improved, social anthropology gained in prominence.
Although Chicago sociologists, such as W. I. Thomas, could be critical of some aspects of Durkheim's comparative sociology, one can find a common historical link in Herbert Spencer's social evolutionism. Spencer, who was a notably controversial person in some academic circles, is perhaps best known for having sided with big industry against advocates of reform, as well as for having coined the expression survival of the fittest. However, certain sociologists and social anthropologists were attracted to his idea of a comparative sociology. In contrast, the anthropology based on historicism with Boas in its front line was a consistent opponent of Spencer's social evolutionism as a whole (Warner, 1968 [1940]; Voget, 1975, pp. 480-538; Radcliffe-Brown, 1976 [1958], pp. 178-189; Boas, 1982 [1940]; Perrin, 1995; Stocking, 1999 [1995], pp. 305-306; Andersson, 2007).
---
HOW RADCLIFFE-BROWN'S AND WARNER'S STRUCTURAL-FUNCTIONAL MODEL OF THOUGHT INFLUENCED WHYTE
In order to realize their grand plans for a cross-cultural comparative sociology, Radcliffe-Brown (1964 [1939], p. xv) and Warner had a great need of systematic comparisons between different cultural forms, grounded in intensive field studies of particular societies:
What is required for social anthropology is a knowledge of how individual men, women, and children live within a given social structure. It is only in the everyday life of individuals and their behavior in relation to one another that the functioning of social institutions can be directly observed. Hence the kind of research that is most important is the close study for many months of a community which is sufficiently limited in size to permit all the details of its life to be examined.

SCS met Radcliffe-Brown's and Warner's high requirements for a long-term intensive social-anthropological field study of the structural function of social institutions (street gangs, organized crime, police corps, and political machinery) in a particular community. As in the above-mentioned community studies, the concrete research results of SCS were essential empirical facts that could make Radcliffe-Brown's and Warner's cross-cultural comparative sociology more than just groundless speculation about similarities and differences between the world's cultural forms. Earlier "armchair researchers," such as Edward B. Tylor, Lewis H. Morgan, William Sumner, and Herbert Spencer, had been sharply criticized by anthropologists and sociologists like Boas, Malinowski, Radcliffe-Brown, and Thomas for not having enough empirical data to verify their evolutionary theories about distinct cultures' universal origin and progress (Stocking, 1992; McGee & Warms, 2004 [1996]; Andersson, 2007).
There are more points of contact than the historical connection between SCS and Radcliffe-Brown's and Warner's research ambitions for a cross-cultural comparative sociology. Whyte (1993b, p. 272) draws a structural-functional conclusion in SCS:
The corner gang, the racket and police organization, the political organization, and now the social structure have all been described and analyzed in terms of a hierarchy of personal relations based upon a system of reciprocal obligations. These are the fundamental elements out of which all Cornerville institutions are constructed. 22

Whyte's explanation of how the concrete social structure in the North End is functionally and hierarchically linked together in a larger social system lies completely in line with Radcliffe-Brown's (1952, p. 181) explanation of the same process on a more general level:

By the definition here offered 'function' is the contribution which a partial activity makes to the total activity of which it is a part. The function of a particular social usage is the contribution it makes to the total social life as the functioning of the total social system. Such a view implies that a social system (the total social structure of a society together with the totality of social usages in which that structure appears and on which it depends for its continued existence) has a certain kind of unity, which we may speak of as a functional unity. We may define it as a condition in which all parts of the social system work together with a sufficient degree of harmony or internal consistency, i.e. without producing persistent conflicts which can neither be resolved nor regulated.

Warner (1941a, p. 790), too, explains on a general level how the social structure functionally, and not least hierarchically, coheres in a larger social system:

Once the system of rank has been determined, it becomes important to know the social mechanisms which contributed to its maintenance. There arise concomitant problems of how the different social structures fit into the total system.

Whyte (1955, p. 358) explains on the basis of similar structural-functional ideas how the different institutions or organizations and leaders in the North End are functionally and hierarchically connected in a larger social system:

Although I could not cover all Cornerville, I was building up the structure and functioning of the community through intensive examination of some of its parts-in action. I was relating the parts together through observing events between groups and between group leaders and the members of the larger institutional structures (of politics and the rackets). I was seeking to build a sociology based upon observed interpersonal events. That, to me, is the chief methodological and theoretical meaning of Street Corner Society.

22. Whyte also emphasizes in the manuscript "Outline for Exploration in Cornerville" from July 17, 1940 that "the main purpose is to examine the functioning of various groups in the community in order to gain an understanding of human interactions which may be applied in other communities, in other studies" (Whyte, William Foote. Papers, Kheel Center for Labor-Management Documentation and Archives, Catherwood Library, Cornell University).
Although I argue consistently that SCS was part of Radcliffe-Brown's and Warner's cross-cultural research project, it is equally important to stress that Whyte demonstrated both norm conflicts between groups in the North End (such as corner boys and college boys) and a different social organization than what existed in the surrounding majority society (Whyte, 1967 [1964], p. 257; Lindner, 1998). Nevertheless, Whyte (1993b, p. 138) maintains through a structural-functional explanatory model that emphasizes consensus between groups that the main function of the police in Boston was not to intervene against crime, but to regulate the street gangs' criminal activities in relation to the surrounding society's predominant norm system:
On the one side are the "good people" of Eastern City [Boston], who have written their moral judgments into the law and demand through their newspapers that the law be enforced. On the other side are the people of Cornerville, who have different standards and have built up an organization whose perpetuation depends upon freedom to violate the law.

Vidich (1992), however, holds that "There is no evidence in Whyte's report that he used anyone else's conceptual apparatus as a framework for his descriptive analysis of Cornerville" (p. 87). A plausible explanation for his interpretation is that the study is primarily a detailed and particularly concrete description of the street gangs' organization. It is necessary to have comprehensive knowledge about the history of the subjects of anthropology and sociology in order to trace the connection in the history of ideas between SCS and Radcliffe-Brown's structural functionalism (Argyris, 1952, p. 66). Further direct support for my claim is that Whyte (1967 [1964]) himself called his approach "a structural-functional approach. It argues that you cannot properly understand structure unless you observe the functioning of the organization" (p. 265).
---
WHYTE'S POSITION IN THE HISTORY OF SOCIAL ANTHROPOLOGY AND SOCIOLOGY
The purpose of the accompanying diagram is to chart Whyte's position in social anthropology and sociology at the universities of Chicago and Harvard. As I have argued, the Diagram shows that Whyte did not have his disciplinary home among the Chicago sociologists, even though he ascribes some importance to Hughes. At the same time, I would emphasize that the diagram primarily includes the intellectual influences and lines of thought that were embodied in social anthropologists and sociologists who were active at those universities when Whyte wrote SCS during the 1930s and 1940s. Whyte was very probably influenced by other research colleagues and thinkers, mainly after he left Chicago in 1948. Certain influential thinkers, such as Durkheim, Radcliffe-Brown, and Park, had an indirect impact on Whyte, while other researchers like Warner, Arensberg, and Hughes exerted direct personal influences. My inclusion of researchers who did not influence Whyte is intended to give the reader a wider picture of the intellectual atmosphere and research landscape that prevailed at this time.
The reason why no line has been drawn from Radcliffe-Brown to Whyte is that this influence went chiefly via Arensberg and Warner. Like all diagrams, mine builds upon various necessary simplifications; for instance, I have excluded a number of influential persons, such as Burgess, Chapple, Henderson, and Mayo. Although I show only one-way directions of influence, apart from the case of colleagues where no clear direction existed, there was naturally a mutual influence between several of the researchers. The influence between some researchers, such as Warner, Arensberg, and Whyte, was, however, stronger than that among others, for example, Hughes and Whyte. Moreover, I have chosen to include Spencer in the diagram, despite the strong criticism by researchers like Durkheim and Thomas in certain respects of his evolutionary laissez-faire and speculative racial doctrine. Neither should Spencer be perceived as a social-scientific forefather to the research traditions in the diagram. On the other hand, I would maintain that an idea-historical line of thought, largely originating in Spencer's evolutionism, united the scientific ambitions of Durkheim/Radcliffe-Brown/Warner as well as of Thomas/Park to make comparisons between different cultures and social types based on empirical research (Voget, 1975, pp. 480-538; Perrin, 1995; Stocking, 1999 [1995], pp. 305-306). Warner (1968 [1940], p. xii) therefore claims: Some modern anthropologists have come to realize that the diverse communities of the world can be classified in a range of varying degrees of simplicity and complexity, much as animal organisms have been classified, and that our understanding of each group will be greatly enhanced by our knowledge of its comparative position among the social systems of the world.
In spite of its unavoidable limitations, the diagram contains a wealth of names to display the idea-historical relationship between Radcliffe-Brown's and Warner's cross-cultural research project and leading Chicago sociologists, such as Thomas and Park. The social-anthropological education that Whyte got from Arensberg and Warner at the universities of Chicago and Harvard had its historical origins in Durkheim's and Radcliffe-Brown's comparative and structural-functional sociology. As in the case of the Chicago school, Radcliffe-Brown's and Warner's research project did not last long enough to make it possible to reach any scientifically verifiable conclusions about similarities and differences between cultures. Nonetheless, Warner in the Yankee City Series achieved path-breaking research results about American society, such as the crucial importance of social class affiliation for the ability to pursue a formal professional career (Warner, 1962 [1953], 1968 [1940]).
---
CONCLUSION
Social scientists, such as Martin Bulmer (1986 [1984]), Anthony Oberschall (1972), and George W. Stocking (1982 [1968]) have shown the importance of placing the researchers and their ideas in a historical context. In textbooks and historical overviews, there is often a tendency to include researchers and ideas within anachronistic themes that do not take sufficient consideration of the colleagues and the departments where the researchers were educated. However, Whyte and SCS have often been placed into an anachronistic context, or there has been a tendency to take insufficient consideration of his colleagues and the department where the primary research was carried out. All of these studies assume that SCS is part of the tradition that is today called the Chicago school of sociology (Klein, 1971; Jermier, 1991; Schwartz, 1991; Boelen, 1992; Thornton, 1997).
Instead, there are other social-anthropological studies that belong to the same comparative research tradition as SCS. For example, Horace Miner's St. Denis (1963 [1939]); John F. Embree's Suye Mura (1964 [1939]); Conrad M. Arensberg's and Solon T. Kimball's Family and Community in Ireland; Edward H. Spicer's Pascua (1984 [1940]); and Allison Davis, Burleigh B. Gardner, and Mary R. Gardner's Deep South (1965 [1941]). Even Everett C. Hughes' French Canada in Transition (1963 [1943]) might be argued to lie in the research field's margin. With the exception of Arensberg and Hughes, the reason that none of the other researchers are included in the diagram above is that Whyte did not, according to my findings from archival studies, correspond with them or know them personally, nor was he directly influenced by any of them while doing research at Harvard or Chicago. During this same time period, that is, the late 1930s and early 1940s, these researchers were doing fieldwork at such different geographical locations as Canada, Japan, Ireland, and the United States.
Not only did Whyte come to the ground-breaking conclusion that the slum is informally organized, he also conducted participant observations for a longer period of time than anyone before him had done in an urban context. An equally important discovery was the understanding of the street gang's internal structure and informal leadership. It was not until Whyte, after 18 months of intensive field work, dropped the idea of conducting a comprehensive community study, with the Middletown studies and the Yankee City Series serving as models, that the group structure of street gangs became the main focus of his study. Whyte came to this conclusion by interconnecting Arensberg's and Chapple's observation method with participant observations of several bowling matches in the fall of 1937 and spring of 1938. In the often-mentioned bowling match in April 1938, where most of the Nortons (corner boys) came to settle who was the greatest bowler, the results of the match coincide, with a few exceptions, with the group's hierarchical structure.

George C. Homans came to use Whyte's meticulous observations during 18 months of the Nortons' everyday practices as a case in The Human Group (1993 [1951]). Using five ethnographic case studies, Homans aimed to reach universal hypotheses about norms, rank, and leadership in primary groups. He (1993 [1951]) claims that after a number of detailed field studies of primary groups during the interwar period, 1919-1938, there was a need for sociological generalization of "the small group" (p. 3). The book, therefore, had a twofold purpose; specifically, "to study the small group as an interesting subject in itself, but also, in so doing, to reach a new sociological synthesis" (Homans, 1993 [1951], p. 6). Whyte and Homans had been research colleagues at the Society of Fellows. Consequently, it is probably no coincidence that both, however at different times, became interested in primary groups' formal and informal organization. When Homans (1993 [1951]) generalizes Whyte's ethnographic observations of the corner boys' internal structure, he also at the same time changes the concept of status to rank because he "wants a word that refers to one kind only" (p. 179). With "one kind," Homans (1993 [1951]) means that status has a sociologically multifaceted meaning that refers to the person's social practice and position in a social network, while the concept of rank more clearly refers to larger organizations, such as companies or the military with a "pyramid of command" (p. 186). Because of this, Whyte's complex ethnographic discoveries of street gangs' social organization and mutual obligations to organized crime, police corps, and political machinery became reduced, in Homans' theoretical study, to a question regarding the internal chain of command among the Nortons. Despite these shortcomings, Homans (1993 [1951]) argued that the intention was to develop "a theory neither more nor less complex than the facts it subsumes" (p. 16). At the same time, it is difficult to disregard the fact that both Homans' The Human Group and SCS were in various ways pioneering contributions in the creation of the research field of "the small group." However, it is worth noting that in The Human Group there is an incipient conceptual change from the concept of the primary group to the small group, which from Homans' point of view probably marks a generational shift in sociological research, although it is basically about the same social phenomenon.
Finally, I have argued that William Foote Whyte's social-anthropological schooling at Harvard was crucial to his, at the time, path-breaking conclusion that the North End had an informal well-functioning social organization. If Whyte instead had been educated in sociology at the University of Chicago, he would also have had a preunderstanding of the slum as being socially disorganized. Social anthropologists Radcliffe-Brown, Warner, and Arensberg passed on to Whyte the "paradigm" to view the slums (North End) as socially organized, and not socially disorganized as the majority of American sociologists claimed (Warner, 1941a; Gibbs, 1964). The fact that this debate is still relevant, at least in the United States, is shown by researchers such as Philippe Bourgois (2003 [1995]) and especially Loïc Wacquant (2008), who are very critical of those who define poor urban neighborhoods as disorganized, while researchers like Robert J. Sampson (2012) argue that the perspective continues to have relevance (pp. 36-39).
4a0680a792c2ca98babc112894670e2a2ffef20d | To troll or not to troll: Young adults’ anti-social behaviour on social media | 2,023 | [
"JournalArticle",
"Review"
] | Online anti-social behaviour is on the rise, reducing the perceived benefits of social media in society and causing a number of negative outcomes. This research focuses on the factors associated with young adults being perpetrators of anti-social behaviour when using social media. Based on an online survey of university students in Canada (n = 359), we used PLS-SEM to create a model and test the associations between four factors (online disinhibition, motivations for cyber-aggression, self-esteem, and empathy) and the likelihood of being a perpetrator of online anti-social behaviour. The model shows positive associations between two appetitive motives for cyber-aggression (namely recreation and reward) and being a perpetrator. This finding indicates that young adults engage in online anti-social behaviour for fun and social approval. The model also shows a negative association between cognitive empathy and being a perpetrator, which indicates that perpetrators may be engaging in online anti-social behaviour because they do not understand how their targets feel. | Introduction
Anti-social behaviour on social media, such as harassment and bullying, is on the rise [1]. This trend has intensified since the beginning of the COVID-19 pandemic in 2020, when much social communication moved to online spaces [2][3][4]. Online anti-social behaviour can lead to several negative outcomes, ranging from decreasing an individual's satisfaction with technologies and being online in general [5] to causing mental and emotional stress in victims [6].
Consequently, those at the receiving end of online anti-social behaviour (such as people who experience online harassment) may adopt coping strategies that can further isolate them [7].
In this study, we use the term "online anti-social behaviour" to encompass a range of harmful acts, including trolling (the intentional provocation of others through inflammatory online comments), bullying (aggressive behavior towards an individual or group), and harassment (offensive or abusive conduct directed at others) that have a negative impact, causing harm or distress to individuals or communities [8][9][10]. While bullying and harassment are related concepts, bullying is often defined as repeated aggressive behavior, typically by someone who perceives themselves to have more power over someone else [11]. Harassment, on the other hand, is a broader concept that includes any unwanted, offensive, or abusive conduct towards others.
While many studies on anti-social behaviour have focused on children and adolescents [12-16, for example], there is limited research focusing on young adults. Importantly, young adults are more likely than any other age group to report experiencing online harassment [1] and other forms of anti-social behaviour, especially during the COVID-19 restrictions [4]. Young adults are also generally more active online, particularly in Canada [17]. As such, the research focuses on university students.
This research focuses on the perpetrators of anti-social behaviour on social media and asks: What factors are associated with young adults being perpetrators of anti-social behaviour when using social media? The contributions of this research are twofold. First, most previous research has examined the intrinsic and extrinsic characteristics of people targeted by perpetrators of anti-social behaviour [see 1,5,6,18]. Consequently, there is less understanding of what motivates perpetrators. Second, among the studies that focused on perpetrators, many looked at one or a few factors associated with the perpetration of anti-social behaviour [19][20][21][22][23]. Building on the previous scholarship, this research identifies and evaluates a more comprehensive model to understand psychological, social, and technology-associated factors related to being a perpetrator of online anti-social behaviour. Specifically, the proposed model incorporates the following factors known in the literature, but not necessarily tested together: online disinhibition, motivations for cyber-aggression, self-esteem, and empathy.
---
Literature review
While social media can provide rewarding social connections for many, it can also be a space where users face anti-social behaviour. A recent study identified that 41% of Americans have personally experienced some form of online harassment or abuse; people who experienced online anti-social behaviour cited they were potentially targeted because of their political views, gender, race, ethnicity, religion and sexual orientation [1].
Anti-social behaviour is not a phenomenon exclusive to the internet; psychologists have widely analyzed anti-social behaviour in other contexts for several years prior to the widespread adoption of the internet [10]. The increased use of online platforms has contributed to the exponential rise of online anti-social behaviour [24,25], which has, consequently, reduced the perceived benefits and promise of social media in society [26]. Recently, the increasing reliance on online platforms due to the COVID-19 pandemic restrictions has also been linked to the rise of anti-social behaviour [3,4], perhaps because people have been spending more time on social media [2].
Online anti-social behaviour has several negative outcomes. First, it can reduce online participation, which is particularly impactful for minorities and marginalized communities. Lumsden and Harmer [27] identified that online anti-social behaviour is another avenue of disenfranchisement and discrimination for equity-deserving and marginalized communities, impacting their status, legitimation, and participation in online spaces. Second, previous research has shown that the effect of anti-social behaviour goes beyond the targets and also includes bystanders. Duggan [6] reported that 27% of Americans decided not to share something online after witnessing the abuse and harassment of others. Together, these negative effects of online anti-social behaviour can reduce the diversity of voices on social media and make people uncomfortable going online [28]. Third, online anti-social behaviour can have profound effects on individuals' emotions, reputation and personal safety [6].
While the effects of anti-social behaviour have been well documented, previous research is less clear on what makes someone engage in such behaviour towards another person online. To explain the prevalence of anti-social behaviour on social media and in public discourse, Hannan [29] revisited Neil Postman's [30] theory about how the entertainment frame, which identifies the need for all information to be entertaining, has influenced public discourse. Focusing on television broadcasts in the last century, Postman [30] warned that the entertainment frame has seeped into education, journalism, and politics, which has changed how people interact with one another and society. In a society driven by an entertainment frame, individuals begin to expect all interactions to be entertaining, which influences behaviour and the boundaries of what communication is deemed acceptable. While Postman was writing about television, his theoretical lens has been effectively employed to understand social media [29]. Hannan [29] argued that, just as television turned public discourse into "show business", the preeminence of online platforms has turned the online public sphere into a sort of "high school". Trolling on social media has become mainstream as a new genre of public speech, which shapes the discourse and the practices of politicians, public figures, and citizens.
To understand how the entertainment frame relates to a person's likelihood to engage in online anti-social behaviour, we developed a conceptual framework. The following section describes our conceptual framework, which seeks to explain what makes someone engage in anti-social behaviour on social media. Specifically, we describe the factors and formulate a model of the drivers of the perpetration of online anti-social behaviour.
---
Conceptual framework and research hypotheses
---
Cyber-aggression
Since the goal of this research is to identify factors associated with being a perpetrator of antisocial behaviour on social media, Shapka's and Maghsoudi's [31] concept of cyber-aggression is applied. Instead of employing a binary classification and directly asking participants whether they consider themselves to be perpetrators or victims, the main dependent variable is the cyber-aggression construct. This construct assesses the level of people's engagement in behaviour frequently associated with being a perpetrator, such as making hurtful comments about somebody's race, ethnicity or sexual orientation, purposely excluding a certain person or group of people, and posting embarrassing photos or videos of someone else.
---
Online disinhibition
Online disinhibition refers to the phenomenon when people say or do something online that they would not normally do in a face-to-face setting [32]. Suler [32] attributes this effect to six factors: (1) dissociative anonymity, as it is harder to determine who people are online; (2) invisibility, as people often cannot see each other online; (3) asynchronicity, as online communication does not require the sender and receiver to be co-present online for messages to be sent; (4) solipsistic introjection, as people tend to assign voices and other visual elements to whom they interact with due to the absence of face-to-face cues; (5) dissociative imagination, as some people can imagine separate dimensions from the real world when interacting online; and (6) minimization of status and authority, as people may perceive more of a peer-relationship as everyone "starts off on a level playing field" (p. 324) and therefore may be more willing to misbehave. Benign disinhibition refers to the effect when these factors motivate people to engage in positive interactions online. On the other hand, toxic disinhibition refers to when these factors motivate people to propagate hate and violence [32].
This study focuses on the association between online disinhibition and perpetration of online anti-social behaviour, as online disinhibition is linked to a higher likelihood of sharing harmful content [33]. Research suggests that use of social media enhances online disinhibition leading to anti-social behaviour [9]. Research has identified a positive association between online disinhibition and being a perpetrator of cyber-aggression [33][34][35]. In particular, Udris [35] separately analyzed the two dimensions of online disinhibition (i.e., benign disinhibition, and toxic disinhibition) and found that both positively predicted being a perpetrator. Wachs et al. [36] and Wachs and Wright [37] similarly found a positive association between the toxic dimension of online disinhibition and online hate. Building on this work, we propose the following hypothesis: H1. Online disinhibition is positively associated with being a perpetrator of cyber-aggression. (Benign and toxic disinhibition are tested separately.)
---
Motivations for cyber-aggression
Runions et al. [38] proposed a model to explore aggression motives based on the Quadripartite Violence Typology. This typology explores two dimensions: motivational valence and self-control. The motivational valence is aversive when the aggressive action of an individual is a reaction to violence or provocation. The motivational valence is appetitive when the motivation for one's aggressive behaviour is to seek an exciting experience or some kind of reward. In summary, while aversive motivational valence is reactive, appetitive motivational valence is proactive. The self-control of aggressive actions might be impulsive or controlled depending on the deliberation and how it was planned. Based on the combination of the two dimensions, there are four distinct motivations for cyber-aggression: impulsive-aversive (Rage), controlled-aversive (Revenge), controlled-appetitive (Reward), and impulsive-appetitive (Recreation) [38]. Runions et al. [38] identified that all four motivations for cyber-aggression (i.e., Rage, Revenge, Reward, and Recreation) predicted being a cyber-aggression perpetrator. In terms of specific domains and different anti-social behaviours, Gudjonsson and Sigurdsson [20] found that excitement (Recreation) was a commonly endorsed motive for offending others. König et al. [23] found that victims of traditional bullying who engaged in cyberbullying tend to do it for revenge. Similarly, Fluck [14] identified that bullies mostly give revenge as their reason for engaging in cyber-aggression, although sadism linked to fun experiences (Recreation) was also mentioned by some bullies. Sadism was also found to be associated with online trolling, which indicates that trolls engage in anti-social behaviour for fun and enjoyment [39]. Thus, we expect that: H2. The motivations for cyber-aggression are positively associated with being a perpetrator of cyber-aggression. (Each of the four motivations for cyber-aggression is tested separately).
---
Self-esteem
Self-esteem refers to the perception one has towards the self [40,41]. Self-esteem is usually viewed as a two-dimensional construct: self-confidence and self-deprecation. Self-confidence refers to the positive attitudes towards the self. Self-deprecation focuses on negative perceptions towards the self. It is important to analyze the influence of self-esteem on cyber-aggression because self-esteem has been traditionally associated with offline anti-social behaviour, such as bullying [40]. Among research that explored the association between self-esteem and cyber-aggression, Rodríguez-Hidalgo et al. [15] found that self-deprecation was positively associated with being a perpetrator, but found nonsignificant associations between self-confidence and being a perpetrator. Other studies combined self-confidence and self-deprecation into a single construct of self-esteem (reverse-scoring items related to self-deprecation) and identified that lower levels of self-esteem lead to a higher likelihood of being a cyber-aggression perpetrator [40,42]. Aligned with the prior work, we hypothesize that: H3. Self-esteem is negatively associated with being a perpetrator of cyber-aggression. (Self-confidence and self-deprecation are assessed separately).
---
Empathy
Empathy refers to the ability to experience and comprehend other people's emotions and consists of two dimensions: the affective dimension (i.e., how one experiences the emotions of others) and the cognitive dimension (i.e., the capacity to comprehend the emotions of others) [43]. Empathy is relevant to understanding the motivations of anti-social behaviour because the capacity to experience and understand the emotions of others often leads to positive social interactions, such as helping others and sharing positive emotions and thoughts [12,21]. In contrast, a lack of empathy may lead to negative social interactions. Ang and Goh [12] found that both cognitive and affective empathy negatively predicted being a perpetrator of cyber-aggression. Jolliffe and Farrington [21] analyzed the influence of empathy in bullying among adolescents and found mixed results: both cognitive and affective empathy were negatively associated with bullying among boys, and only affective empathy was negatively associated with bullying among girls (the authors note that the low numbers of girls involved in bullying could have prevented cognitive empathy from reaching statistical significance). Casas et al. [44] analyzed empathy as a unidimensional construct (combining both cognitive and affective empathy) and found that low empathy leads to higher cyber-aggression perpetration. Other studies using various adapted scales to measure empathy found similar results [15,22,45].
In a systematic review, van Noorden et al. [16] identified that: (1) most studies reported a negative association between cognitive empathy and being a cyber-aggression perpetrator (although a few studies did not find any significant association or found a positive association), and (2) most studies reported a negative association between affective empathy and being a cyber-aggression perpetrator (with a few studies finding no association). Thus, we propose the following hypotheses:
H4. Empathy is negatively associated with being a perpetrator of cyber-aggression. (Cognitive and affective empathy are assessed separately).
Table 1 provides a summary of the research hypotheses. To identify factors associated with perpetration of anti-social behaviour, the scales included in the model have specific dimensions that can provide more granular results. Therefore, the model includes detailed scales to analyze how each factor is associated with being a perpetrator of online anti-social behaviour.
---
Methods
Prior to data collection, the study received approval from the Research Ethics Boards at both Toronto Metropolitan University and Royal Roads University (at the time of the study, the authors were affiliated with one of these institutions). Undergraduate students at Toronto Metropolitan University who signed up for the Student Research Participant Pool were invited to voluntarily participate in an online survey. The Student Research Participant Pool invites students to voluntarily participate in scholarly research and receive extra course credit that can be applied to specific courses.
Before starting the survey, which was hosted on Qualtrics, an online platform, participants were required to review and agree to the informed consent form. Students were given the opportunity to review and save the consent form on their own devices. They were also able to withdraw from the survey at any time by simply closing their browser. In such cases, their data was not used in the study. As this was an online survey, students had the flexibility to complete it at their own pace and from any location of their choosing.
In total, 557 students participated in the survey between March 9 and April 18, 2022. The survey dataset was cleaned and the data was completely anonymized. A two-step disqualification process was used to ensure the high quality of the data. First, an attention check question was employed to identify participants who were not carefully reading the questions, which resulted in the removal of 182 responses from participants who answered the question incorrectly. Second, responses from participants who completed the survey in less than 5 minutes (n = 16) were removed, as such short completion times indicate that the questions were not read carefully. We did not exclude responses that took longer than expected because some students may have opened the survey page but completed it at a later time. After data cleaning, the final dataset consisted of 359 participants. On average, respondents completed the survey in 25 minutes, and the median completion time was 13 minutes, which was aligned with the anticipated completion time in the piloted survey. The final dataset is available at doi.org/10.6084/m9.figshare.22185994.
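For readers who want to apply a similar screening procedure to their own survey exports, the sketch below illustrates the two-step disqualification in pandas. The file name, column names, and the expected attention-check answer are hypothetical placeholders rather than the actual variables used in this study.

```python
# Minimal sketch of the two-step disqualification described above.
# Assumptions: a raw export with one row per respondent, an "attention_check"
# column, and a "duration_sec" column recording completion time in seconds.
import pandas as pd

raw = pd.read_csv("survey_export.csv")   # hypothetical file name
n_raw = len(raw)                         # 557 respondents in this study

# Step 1: drop respondents who failed the attention-check item.
EXPECTED_ANSWER = "Strongly agree"       # assumed correct response
step1 = raw[raw["attention_check"] == EXPECTED_ANSWER]

# Step 2: drop respondents who finished in under 5 minutes (300 seconds).
clean = step1[step1["duration_sec"] >= 300]

print(f"Removed for failed attention check: {n_raw - len(step1)}")
print(f"Removed for completing too quickly: {len(step1) - len(clean)}")
print(f"Final sample size:                  {len(clean)}")
```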
Partial Least Squares Structural Equation Modeling (PLS-SEM) was used to analyze the data. PLS-SEM is a non-parametric approach that can handle complex models and can be used to test relationships between multiple independent and dependent variables simultaneously [46,47]. This method has been widely used in several fields, such as business, political communication, and psychology [48,49], and more recently internet studies [50][51][52]. SmartPLS v. 3.3.9 software was used to analyze the association between the constructs below.
---
Table 1. Summary of the research hypotheses (https://doi.org/10.1371/journal.pone.0284374.t001).

Online disinhibition
H1a. Benign online disinhibition is positively associated with being a perpetrator of cyber-aggression.
H1b. Toxic online disinhibition is positively associated with being a perpetrator of cyber-aggression.

Motives for cyber-aggression
H2a. Rage is positively associated with being a perpetrator of cyber-aggression.
H2b. Revenge is positively associated with being a perpetrator of cyber-aggression.
H2c. Reward is positively associated with being a perpetrator of cyber-aggression.
H2d. Recreation is positively associated with being a perpetrator of cyber-aggression.

Self-esteem
H3a. Self-deprecation is positively associated with being a perpetrator of cyber-aggression.
H3b. Self-confidence is negatively associated with being a perpetrator of cyber-aggression.

Empathy
H4a. Cognitive empathy is negatively associated with being a perpetrator of cyber-aggression.
H4b. Affective empathy is negatively associated with being a perpetrator of cyber-aggression.
---
Measurement scales
The scales used in the online survey have been tested and validated by previous research. All constructs were measured using a 5-point Likert scale ranging from "strongly disagree" to "strongly agree," except for the measurement of being a perpetrator of online anti-social behaviour, which was measured using a 5-point Likert scale ranging from "never" to "always." S1 Appendix outlines the constructs and scales used in the research. Based on the previous applications of these scales, all were modeled as reflective constructs in the PLS-SEM analysis. Cyber-aggression was measured using the Cyber-aggression and Cyber-victimization Scale [31]. While this scale has two components, cyber-aggression and cyber-victimization, only the former (CAVP) was used in our research due to the focus on perpetrators of anti-social behaviour. The scale included twelve indicators with statements about how individuals behave toward others online, such as "posted or re-posted something embarrassing or mean about another person." This scale is particularly useful because it focuses on cyber-aggressive behaviour overall (i.e., specific acts associated with cyber-aggression). This scale overcomes a limitation of previous scales that focused on specific online platforms (e.g., Facebook) or modes of communicating (e.g., computers or cellphones) [31].
The Online Disinhibition Scale [35] was used to measure benign disinhibition (BOD) and toxic disinhibition (TOD). Benign disinhibition was measured by seven indicators and toxic disinhibition was measured by four indicators.
To measure the four motivations for cyber-aggression, an adapted version of the Cyber-Aggression Typology Questionnaire [25] was used. In Antipina et al.'s [13] adaptation, each motive (i.e., Rage, Revenge, Reward, and Recreation) was measured by five indicators.
To evaluate respondents' levels of self-esteem, Rosenberg's Self-Esteem Scale [41] was used, whereby the two dimensions of self-esteem were explored separately. Self-confidence (RSEC) and self-deprecation (RSED) were each measured by five indicators.
The Basic Empathy Scale [43] was used to explore cognitive empathy (BCE) and affective empathy (BAE). Cognitive empathy was measured by nine indicators and affective empathy was measured by eleven indicators.
Table 2 provides descriptive data of the constructs in our dataset.
---
Constructs and model assessments
Current PLS-SEM guidelines were followed to assess the reliability of the constructs, the validity of the model, and to report the results [47,53]. The following procedures for the constructs and model assessments were used: internal consistency, discriminant validity, collinearity between indicators, and significance and relevance of the structural model. We identified internal consistency issues in the following constructs: Affective Empathy (BAE), Cognitive Empathy (BCE), Benign Online Disinhibition (BOD), and Self-Deprecation (RSED). Additionally, we identified indicators with low outer loadings for Toxic Online Disinhibition (TOD). To solve these issues, we removed indicators with loadings below 0.6. Although the ideal threshold is 0.7, a threshold of 0.6 is acceptable for exploratory research [53]. We decided to use the 0.6 threshold for outer loadings because the more conservative 0.7 threshold would cause the Cronbach's alpha for BOD to go below the minimum acceptable value of 0.6. After excluding six BAE indicators, five BCE indicators, four BOD indicators, two TOD indicators, and two RSED indicators, values of composite reliability were well above the minimum of 0.6, and values of Average Variance Extracted (AVE) were above the minimum of 0.5 for all constructs. Cronbach's alpha values were above the ideal 0.7 for most constructs, except for BOD and BCE, which were above the minimum acceptable value of 0.6. In total, we removed 26% of the indicators, which is within acceptable limits for exploratory research [54]. We have verified that the majority of constructs (excluding toxic online disinhibition) were assessed using at least three items, which is considered ideal for statistical identification of the construct [54]. Table 3 details the internal consistency values, while Table 4 displays the loadings of the indicators.
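As a companion to the reported thresholds, the sketch below shows how the three internal consistency measures named above (Cronbach's alpha, composite reliability, and AVE) can be computed from raw item scores and standardized outer loadings. The loading values in the example are invented for illustration and are not the loadings reported in Table 4.

```python
# Hedged sketch: standard formulas for the reliability measures named above.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one column per indicator of a single reflective construct."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def ave_and_composite_reliability(loadings) -> tuple[float, float]:
    """loadings: standardized outer loadings of one reflective construct."""
    lam = np.asarray(loadings, dtype=float)
    ave = float(np.mean(lam ** 2))
    cr = float(lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1 - lam ** 2)))
    return ave, cr

# Illustrative (made-up) loadings for a four-indicator construct:
ave, cr = ave_and_composite_reliability([0.72, 0.81, 0.65, 0.70])
print(f"AVE = {ave:.3f}, composite reliability = {cr:.3f}")
```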
We also identified one discriminant validity issue. The HTMT correlation between Rage and Revenge was above 0.95, which suggests that the two constructs were not empirically distinct from each other in the model. Therefore, we decided to combine the two constructs into one, since both focus on aversive motives for cyber-aggression [25,38]. This approach is aligned with prior research on the motivational valence of cyber-aggression [55]. After creating a single construct for aversive motives (Rage and Revenge), no other discriminant validity issues were identified (see Table 5). There were no collinearity issues in the data, as VIF values were below 5 for all indicators.
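For reference, the HTMT ratio used for this discriminant validity check can be computed directly from an item correlation matrix, as the sketch below illustrates. The construct and item names are hypothetical; the function simply follows the usual definition of HTMT as the average heterotrait-heteromethod correlation divided by the geometric mean of the average monotrait-heteromethod correlations.

```python
# Hedged sketch of the HTMT (heterotrait-monotrait) ratio for two constructs.
import numpy as np
import pandas as pd

def htmt(item_corr: pd.DataFrame, items_a: list[str], items_b: list[str]) -> float:
    """item_corr: correlation matrix of all indicators (e.g., df.corr())."""
    corr = item_corr.abs()
    # Average correlation between items of construct A and items of construct B.
    hetero = corr.loc[items_a, items_b].values.mean()
    # Average within-construct correlations (upper triangle, diagonal excluded).
    mono_a = corr.loc[items_a, items_a].values[np.triu_indices(len(items_a), k=1)].mean()
    mono_b = corr.loc[items_b, items_b].values[np.triu_indices(len(items_b), k=1)].mean()
    return hetero / np.sqrt(mono_a * mono_b)

# Hypothetical usage: flag a pair of constructs whose HTMT exceeds 0.95.
# value = htmt(df[rage_items + revenge_items].corr(), rage_items, revenge_items)
# print("not empirically distinct" if value > 0.95 else "distinct")
```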
Values of path coefficients (β), f², and R² were considered to measure the relevance of the model, while bootstrapping was used to test the significance of the associations between constructs.
---
Results
The analysis of the model (see Fig 1) shows a moderate positive and significant association between reward and being a perpetrator (β = 0.292), and between recreation and being a perpetrator (β = 0.290), which supports H2c and H2d. The analysis also indicates a weak but significant negative association between cognitive empathy and being a perpetrator (β = -0.110), which supports H4a. No other construct had a significant association with being a perpetrator of cyber-aggression. Table 6 provides detailed information about which hypotheses were supported by the results. The assessment of effect sizes shows a small effect size of reward and recreation on being a perpetrator (both f² = 0.043), and a near-negligible effect size of cognitive empathy on being a perpetrator (f² = 0.014).
In terms of model assessment and explanatory power, the model shows a moderate predictive power (adj. R² = 0.352) and the SRMR indicates a good model fit (0.057 for both the saturated and the estimated model). The blindfolding procedure with an omission distance of 7 returns a positive value of Q² = 0.202, which confirms the predictive relevance of the model.
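For readers unfamiliar with these PLS-SEM statistics, the small sketch below spells out the conventional formulas behind the f² effect sizes and the blindfolding-based Q² reported above. The numeric inputs in the example are placeholders, not values taken from this model.

```python
# Hedged sketch of the effect-size and predictive-relevance formulas used in PLS-SEM.

def cohen_f2(r2_with_predictor: float, r2_without_predictor: float) -> float:
    """Change in R^2 when one predictor is dropped, scaled by the full model's
    unexplained variance; roughly 0.02 small, 0.15 medium, 0.35 large."""
    return (r2_with_predictor - r2_without_predictor) / (1 - r2_with_predictor)

def stone_geisser_q2(sse: float, sso: float) -> float:
    """Blindfolding-based Q^2: 1 minus the ratio of prediction error (SSE) to the
    total sum of squares of the omitted data points (SSO); values above zero
    indicate predictive relevance."""
    return 1 - sse / sso

# Placeholder example values (not the study's figures):
print(cohen_f2(r2_with_predictor=0.36, r2_without_predictor=0.33))
print(stone_geisser_q2(sse=800.0, sso=1000.0))
```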
---
Discussion
In the model, the findings suggest that recreation and reward are two important constructs to understand the perpetration of online anti-social behaviour. In the context of our research, this indicates that appetitive motives for anti-social behaviour (i.e., when the aggression is proactive) are more important than aversive motives (i.e., rage and revenge), in which the aggression is a reaction to another situation. Our findings are consistent with studies that focused on online trolls [39] and young offenders on probation [20], and contrary to studies that focused on bullying and cyber-bullying [14,23]. While online trolls and young people on probation indicate that they engage in online anti-social behaviour for fun, enjoyment, and excitement (related to appetitive motives), bullies and cyber-bullies tend to indicate revenge as their main reason. Therefore, young people in our sample engaging in anti-social behaviour might be seeking excitement and aiming to obtain positive emotions or social status [25,38]. In this sense, self-control, which distinguishes recreation (impulsive) from reward (controlled), does not seem to play a significant role in the likelihood of young people engaging in anti-social behaviour.
---
Table 6. Summary of hypothesis testing results (https://doi.org/10.1371/journal.pone.0284374.t006).

Online disinhibition
H1a. Benign online disinhibition is positively associated with being a perpetrator of cyber-aggression. Result: Not supported.
H1b. Toxic online disinhibition is positively associated with being a perpetrator of cyber-aggression. Result: Not supported.

Motives for cyber-aggression
H2a. Rage is positively associated with being a perpetrator of cyber-aggression. Result: Not supported.
H2b. Revenge is positively associated with being a perpetrator of cyber-aggression. Result: Not supported.
H2c. Reward is positively associated with being a perpetrator of cyber-aggression. Result: Supported.
H2d. Recreation is positively associated with being a perpetrator of cyber-aggression. Result: Supported.

Self-esteem
H3a. Self-deprecation is positively associated with being a perpetrator of cyber-aggression. Result: Not supported.
H3b. Self-confidence is negatively associated with being a perpetrator of cyber-aggression. Result: Not supported.

Empathy
H4a. Cognitive empathy is negatively associated with being a perpetrator of cyber-aggression. Result: Supported.
H4b. Affective empathy is negatively associated with being a perpetrator of cyber-aggression. Result: Not supported.
A previous study that explored the role of different motivations in online and offline aggression [19] found that recreation was more prevalent in online environments, which is aligned with our findings. Graf et al. [19] suggest that recreation may be prevalent online because this motivation is generally associated with less interpersonal motives. On the other hand, Graf et al. [19] identified that reward was more prevalent in the offline context, especially because this motivation is generally associated with social dynamics such as group affiliation and power relations [18,19,56]. Therefore, perpetrators seeking rewards often prefer offline environments because they have more control over the bystanders and how they will shape social structure as a consequence of their acts [19]. During the COVID-19 pandemic, young people have been spending more time online, reducing their access to in-person activities in which they could have engaged in anti-social behaviour for reward purposes. This could explain why reward was identified as a prevalent reason for young people to engage in online anti-social behaviour; they had to adapt how they interact with others in a context that was heavily dependent on online platforms for social interactions.
The data generally supports both Postman's [30] theory of the entertainment frame and how it was later modernized by Hannan [29]. Specifically, we found that university students engage in anti-social behaviour both for fun (i.e., recreation) and social approval (i.e., reward). Perpetrators of anti-social behaviour on social media are doing so because it is entertaining. While recreation is strongly associated with the original theory and the centrality of entertainment in public discourse, reward emerges as particularly important when the theory was revisited by Hannan [29] to account for how social media affected the public discourse, making trolling a central feature of social interactions that emulate a high school setting.
In addition to reward and recreation, the model shows that cognitive empathy is also a factor associated with the perpetration of online anti-social behavior. Those with lower cognitive empathy, indicating a lower capacity to comprehend the emotions of others, are more likely to engage in such behavior. This suggests that perpetrators may be engaging in online anti-social behavior because they do not fully understand how their targets feel. Based on this finding, one potential strategy for reducing the prevalence of online anti-social behavior is to implement psychological interventions that highlight the negative effects of the behavior on the targets.
Interestingly, other factors showed nonsignificant associations with cyber-aggression perpetration. The fact that both benign and toxic online disinhibition had nonsignificant associations with perpetration indicates that characteristics of online platforms (e.g., anonymity and asynchronicity) and perceptions of social norms in online interactions (e.g., minimization of status and authority) do not play a significant role in online anti-social behaviour among university students. Although studies and reports indicated that the prevalence of online anti-social acts (such as online harassment and cyber-bullying) increased during the pandemic [2][3][4], our results indicate that the spike in online anti-social behaviour is less about online disinhibition and more about how most social interactions moved to the online environment. Instead of being a consequence of the online environment, anti-social behaviour is more likely motivated by the need for social approval, group bonding, fun and excitement (as indicated by the positive associations with reward and recreation).
There were no significant associations between any dimensions of self-esteem (i.e., self-confidence and self-deprecation) and being a perpetrator. Therefore, the results do not support findings from previous studies that identified an association between self-esteem and perpetration [15,40,42]. Our data suggests that one's perception towards the self is not a key factor of being a perpetrator, at least not among the studied population.
In summary, this study provides evidence on why young adults, particularly university students, engage in anti-social behavior. By highlighting the association between engagement in anti-social behavior and social factors such as enjoyment and social approval, our study presents a direction for future research to further analyze how social elements play a role in anti-social behavior. While engagement in various forms of anti-social behavior is frequently linked to psychological traits, we found cognitive empathy to be the only significant factor among our study participants. In particular, a lower ability to understand how targets feel may be fueling the desire for fun and social approval without regard for the consequences. Future studies can further explore the relationship between these constructs.
---
Conclusion
The research sought to identify the factors associated with the perpetration of anti-social behaviour. We developed a model to account for the role of online disinhibition, motivations for cyber-aggression, self-esteem, and empathy in the perpetration of online anti-social behaviour.
The findings suggest that three factors are associated with the perpetration of online anti-social behaviour: recreation, reward and cognitive empathy. Both recreation and reward are appetitive motives for anti-social behaviour, which suggests that young people engage in online anti-social behaviour for fun, excitement, and social approval. Cognitive empathy was negatively associated with the perpetration of online anti-social behaviour, which suggests that perpetrators have a lower capacity to comprehend the emotions of others. Perpetrators have a lower understanding of how their targets might feel, and this could partly explain why they engage in online anti-social behaviour.
Other factors showed nonsignificant associations with perpetration. Interestingly, both benign and toxic disinhibition had nonsignificant associations with perpetration, which indicates that the prevalence of online anti-social behaviour is less about the nature of the medium (e.g., anonymity, asynchronicity) and more about individuals involved.
Building on the results, there are two potential strategies in mitigating anti-social behaviour. First, related to our findings that perpetrators are more likely to be motivated by recreation and reward and have lower cognitive empathy, we refer to earlier work by Jolliffe and Farrington [21] who found that making people think about their actions increases their awareness and builds empathy towards the target. In this regard, strategies such as Twitter's intervention to add friction to make people reconsider when posting potentially offensive content [57] might be a strategy to reduce anti-social behaviour on social media. These types of strategies may be useful both in terms of making people think about their targets and potentially understand how they might feel (cognitive empathy), and reducing impulsive anti-social acts (recreation). For example, a recent survey of Twitter users who had posts removed by the platform found that less than 2% of them posted something to intentionally hurt someone [58].
Second, while outside the scope of the current study, Kim et al. [59] found that showing basic community guidelines to users can also encourage individuals to engage in healthier discussions, reducing the amount of problematic content that was reported by others. This suggests that in addition to introducing some friction into online communication, platforms should endeavour to include more education in highlighting community rules and norms set by a given platform or an online community. This way, newcomers to the platform would learn what is and is not acceptable behaviour in a given community from the beginning. This idea is not new; various communities on Reddit have already adopted this approach. In contrast, most larger social media platforms tend to develop long, jargon-ridden guidelines of community norms, which are then buried in the fine print and are not seen or read by users [60]. Katsaros et al. [58] found that one in five users who violated Twitter's rules had never read the platform's guidelines on appropriate behaviour, and of those who had read the rules, over half of them were merely somewhat familiar or less familiar with them.
As with any empirical work, the research has several limitations that stimulate future research in this area. Since this study relies on a sample of undergraduate students from one urban university in Canada, our sample is only representative of this group of young adults. Future studies could expand the work by using different and/or larger samples, such as nationally representative samples of adults. The reliability of some scales was also below the expected threshold, an issue that was solved by following the current PLS-SEM procedures. Therefore, future studies can revalidate some of these scales by using larger and/or more diverse samples.
---
The anonymized dataset is available via the following DOI: 10.6084/m9.figshare.22185994.
---
Supporting information S1 Appendix. Constructs. (DOCX)
---
Author Contributions
Conceptualization: Felipe Bonow Soares, Anatoliy Gruzd, Jenna Jacobson, Jaigris Hodson.
Data curation: Felipe Bonow Soares, Anatoliy Gruzd.
---
Formal analysis: Felipe Bonow Soares.
Funding acquisition: Anatoliy Gruzd, Jenna Jacobson, Jaigris Hodson.
Methodology: Felipe Bonow Soares, Anatoliy Gruzd, Jenna Jacobson, Jaigris Hodson.
---
Project administration: Anatoliy Gruzd.
Writing -original draft: Felipe Bonow Soares, Anatoliy Gruzd.
---
Writing -review & editing: Felipe Bonow Soares, Anatoliy Gruzd, Jenna Jacobson, Jaigris Hodson.
1743bc103bfa22cf61b133c8a56ce21639971ebc | Connecting Research and Practice in TESOL: A Community of Practice Perspective | 2,015 | [
"JournalArticle"
] | In line with a growing interest in teacher research engagement in second language education, this article is an attempt to shed light on teachers' views on the relationship between teaching and research. The data comprise semi-structured interviews with 20 teachers in England, examining their views about the divide between research and practice in their field, the reasons for the persistence of the divide between the two and their suggestions on how to bridge it. Wenger's (1998) Community of Practice (CoP) is used as a conceptual framework to analyse and interpret the data. The analysis indicates that teacher experience, learning and ownership of knowledge emerging from participation in their CoP are key players in teachers' professional practice and in the development of teacher identity. The participants construe the divide in the light of the differences they perceive between teaching and research as two different CoPs, and attribute the divide to the limited mutual engagement, absence of a joint enterprise and lack of a shared repertoire between them. Boundary encounters, institutionalised brokering and a more research-oriented teacher education provision are some of the suggestions for bringing the two communities together. | Introduction
Research in teaching English to speakers of other languages (TESOL) has generated a revived interest in encouraging teachers to engage in and with research as part of their professional practice and development (Tavakoli & Howard, 2012; Belcher, 2007; Borg, 2009, 2010; Borg & Liu, 2013; Ellis, 2010; Erlam, 2008; Nassaji, 2012; Wright, 2010). This interest is evidenced by the increasing number of research articles, conference themes and plenary speeches dedicated to this topic to promote teacher research engagement (Borg, 2011; Ellis, 2010; Kumaravadivelu, 2011). Despite all the research interest and notwithstanding the repeated call for further research in this area (Borg, 2010; De Vries & Pieters, 2007; Korthagen, 2007; McIntyre, 2005), there is little evidence to demonstrate that TESOL teachers engage with research as part of their day-to-day practice or that adequate attention is paid to examining and analysing this limited engagement. Conscious of the divide between the two and cautious of the dangers associated with it, many researchers have highlighted the sensitivity of the divide by calling it "a perennial problem" (Korthagen, 2007: 303), defining it as "a damaging split between researchers and teachers" (Allwright, 2005: 27), and describing it as "already a significant and perhaps growing divide between research and pedagogy in our field" (Belcher, 2007: 397). The gap between research and practice is commonly acknowledged across different educational disciplines from science to language education (Biesta, 2007; Korthagen, 2007; Pieters & de Vries, 2007; Vanderlinde & van Braak, 2010), suggesting that the problem might be more widespread than documented and "may well be an endemic feature of the field of education" (Biesta, 2007: 295).
While the topic is emerging rapidly as a global line of enquiry, there is neither sufficient empirical evidence nor adequate disciplinary effort to examine and highlight the underlying problems that help increase the divide (Biesta, 2007; Ellis, 2010; Korthagen, 2007; Nassaji, 2012). Korthagen (2007: 303) argues that given the recurrent nature of the problem and with more and more teachers, parents and politicians voicing dissatisfaction with the divide, it is necessary "to restart an in-depth analysis of the relation between educational research and educational practice". Borg (2010: 421) argues that our understanding of teacher research engagement is limited, "with the levels of practical and empirical interest in this research area being minimal". Borg observes that the scope and depth of the available evidence on language teacher research clearly indicate that "teacher research remains largely a minority activity in the field of language teaching" (Borg, 2010: 391). The current paper is responding to the call for further research in this area. By providing an in-depth analysis of teachers' views and beliefs about the relationship between research and practice, the paper is an attempt to help enhance our understanding of teachers' perspectives on why they do or do not engage with research and what they suggest can be done to help improve the situation.
---
Background Theory
---
Teaching and Research
Before discussing the relationship between teaching and research in more detail, and against a backdrop of the disagreement among researchers and teachers about what research is, it is necessary to provide a working definition for research. Following Dörnyei (2007), for the purpose of the current study, research is defined as conducting one's own data-based investigation, which involves collecting and analysing the data, interpreting the findings and drawing conclusions from them. The interest in encouraging TESOL teachers to engage with research can be traced back to Chastain (1976) and Stern (1982). In educational research, the underlying assumption is that teachers who are engaged with research in their practice deliver a better quality of teaching. Williams and Coles (2003) argue that the ability to seek out, evaluate and integrate appropriate evidence from research and innovation is an important aspect of effective development in professional practice. Borg (2010: 391) reports that "research engagement is commonly recommended to language teachers as a potentially productive form of professional development and a source of improved professional practice". Teacher research is also promoted as it is known to encourage teacher autonomy, improve teaching and learning processes and empower teachers in their professional capacity (Allwright, 2005; Borg, 2010; Burns, 1999; McKay, 2009).
A brief overview of research in this area provides a list of factors contributing to the divide between teaching and research. Pennycook (1994) interprets the divide in terms of incommensurability of discourses, and Wallace (1991) attributes it to researchers and practitioners being different people coming from different worlds. Freeman and Johnson (1998: 399) report that lack of a deep understanding and appreciation of teacher knowledge is a main issue, and argue that "research knowledge does not articulate easily and cogently into classroom practice". Non-collaborative school cultures, limited resources and limitations in teachers' skills and knowledge to do research are some of the other barriers reported in the literature (see Borg, 2010, for a detailed account). Analysing the existing divide between research and practice, Ellis (2010: 2) argues that the nexus between research and practice in second language education has changed over the past years since the field "has increasingly sought to establish itself as an academic discipline in its own right". Drawing on the literature in TESOL and Applied Linguistics, Ellis (2010) reports that there is no consensus about the relationship between research and teaching, and that the relationship continues to remain a complex and multifaceted nexus of sometimes conflicting positions on whether or not the research findings are applicable to teaching.
In a recent article, Richards (2010) calls for a better understanding of what constitutes the nature of language teaching competence and performance and sets a 10-item core dimensions framework as the agenda for gaining insight into the necessary skills and expertise in language education. An important dimension that can shed light on the competence-performance relationship, according to Richards, is 'theorizing from practice', i.e. "reflecting on our practices in order to better understand the nature of language teaching and learning and to arrive at explanations or hypotheses about them" (Richards, 2010: 121). Richards (2010) further argues that membership of a community of practice is a core dimension that can provide a rich opportunity for teachers' further professional engagement and development.
Interestingly, Richards' labelling of this call as 'a somewhat ambitious agenda' (p. 120) suggests that achieving such an understanding might be more challenging and formidable than it first appears.
In a study examining TESOL teachers' views on the relationship between teaching and research in England, Tavakoli and Howard (2012), reporting the findings of 60 questionnaires, claimed that, regardless of the context the teachers worked in or the amount of experience they had, the majority of TESOL teachers were not engaged with research and were sceptical about the practicality and relevance of research to their professional practice. It is necessary to note that while teachers in the context of this study, i.e. England, did not mention action research as a research activity they were engaged with, action research is sometimes reported as a popular research activity in other educational contexts (Burns, 2005; Richards, 2010).
The findings of Tavakoli and Howard (2012) were confirmed by Nassaji's (2012) study examining 201 TESOL teachers' views in Canada and Turkey about the relationship between teaching and research. Another interesting finding emerging from both studies is that the teachers who had some research training in their studies, e.g. those who had done a Master's degree, had a more favourable attitude towards the relationship between research and practice. Stenhouse's Curriculum project (1975) was one of the first movements to bridge the divide between educational research and practice in mainstream education in the UK. In this project, Stenhouse introduced a new approach to mainstream teaching in which an active role for teachers in developing research and curriculum in their teaching was promoted. In TESOL, such efforts are more recent. Allwright's work on promoting Exploratory Practice (2003, 2005) and Burns' innovative work advocating action research (1999, 2005) have been influential initiatives to raise teacher awareness and to encourage teacher research engagement. Although promoting action research, i.e. research conducted by teachers to gain a better understanding of their practice and to improve teaching and learning, has attracted attention among teachers and gained currency among researchers, the findings of recent research (e.g. Nassaji, 2012; Tavakoli & Howard, 2012) suggest that it is still not widely practised by teachers around the world. At an organisational level, TESOL Quarterly's commitment to 'publishing manuscripts that contribute to bridging theory and practice in our profession', and ELT Journal's mission to link 'everyday concerns of practitioners with insights gained from relevant academic disciplines', are examples of attempts to connect TESOL research and practice. Recent plenary speeches about the divide (Ellis, 2010; Kumaravadivelu, 2011) and major publications on language teacher research engagement (Borg, 2010; Ellis, 2010, 2013) are other strategies for linking the two.
---
Efforts to Bridge the Divide
The contribution of teacher education to the development of teacher research engagement is worth examining. Freeman and Johnson (1998) were among the first to suggest it was the responsibility of teacher education to link research to practice in second language education. Wright (2009) attributes a significant role to teacher education in defining and disseminating new ideas to teachers, and McKay (2009) considers introducing teachers to classroom research a challenge worth investigating. Overall, while there is a degree of awareness about the usefulness of research knowledge for practice and its positive impact on it, there is insufficient evidence to indicate whether this awareness is transferred into action in teacher education and whether teacher education is effectively used as an opportunity to promote research (Borg, 2010; Kiely & Askham, 2012; Wright, 2009, 2010).
---
TESOL Teacher Education
TESOL teacher education in the UK can be divided into two levels: initial (pre-service) and further (in-service) teacher training programs. An initial TESOL qualification, e.g. CELTA, is a certificate level qualification which has historically been a major point of entry to the TESOL profession in the UK and some other countries (Kiely & Askham, 2012). This trend has recently been changing, with an increasing number of employers requiring more advanced qualifications, e.g. a Diploma or an MA. The certificate level teacher training programs are for graduates with little or no teaching experience (Cambridge English, 2013), and are typically intensive 4-week courses providing the skills, knowledge and hands-on teaching practice less experienced teachers need. The Diploma level teacher training programs, e.g. DELTA, are designed for experienced teachers "to update their teaching knowledge and improve their practice" (Cambridge English, 2013). These usually span two years part-time and act as in-service training and/or professional development. Both types of programs draw on the principles of reflective teaching (Schön, 1983).
The study reported here set out to look into TESOL teachers' views on the relationship between research and practice, to examine the potential factors they believe have contributed to the persistence of the divide, and to seek out solutions from the participants on how to bridge it. Of particular significance to the study is finding out whether Wenger's framework for communities of practice (CoP) can help answer the following research questions.
1. What are TESOL teachers' views on the relationship between teaching and research?
2. What factors do they hold responsible for contributing to the divide between research and practice?
3. What do they suggest can be done to help bridge the divide?
4. What role do they consider for teacher education in promoting teacher research engagement?
---
Analytic Framework: Wenger's (1998) Community of Practice
In similar areas of research, Wenger's (1998) CoP has proved an effective and constructive conceptual framework that allows in-depth insights to emerge into issues related to teachers' understanding, knowledge and learning in the context of their practice (Hasrati, 2005; Kiely & Askham, 2013; Payler & Locke, 2013; Yandell & Turvey, 2007).
Following Wenger (1998, 2000), this study perceives CoPs as "groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly" (Wenger, 2006: front page). In general, CoPs are known to work in a specific Domain, have a defined Community, and exercise a specific kind of Practice. In pursuing their interest and by engaging in a series of activities such as collaborations, discussions and information-sharing tasks, members of a CoP help each other, exchange experiences, develop ways of addressing and solving problems and build relationships. The interplay between social competence (shared in the CoP) and personal experience (the individual's own ways of knowing) is known to result in learning and the further development of a shared competence (Wenger, 2000). The shared competence emerging from participation in the social context of the CoP helps distinguish members from non-members. Wenger (1998) points out that the coherence of a CoP relies on three defining elements: mutual engagement (having a common endeavour), a joint enterprise (being involved in a collective process of negotiation), and a shared repertoire (developing common resources). The concept of CoP has been critiqued by a number of researchers as being elusive and slippery, often appropriated inconsistently in different studies (Barton & Tusting, 2005; Rock, 2005). Other researchers have argued that change as an inherent property of a CoP has been neither theorised nor clearly conceptualised in Wenger's framework (Barton & Hamilton, 2005; Barton & Tusting, 2005). The study reported here provides an opportunity to examine whether adopting CoP as an analytic framework would allow for a better understanding of teachers' views on the relationship between TESOL teaching and research.
---
Methodology
---
Participants
The participants were 20 TESOL teachers teaching English in England at the time of the study. They were teaching EFL, ESOL and/or EAP courses in different organizations including university language centres, state-funded FE colleges and private language schools.
To recruit the participants, a number of English language teaching institutions in England were contacted via emails and their teachers were invited to take part in the study. The 20 participants who volunteered and took part in the interviews came from a range of different educational and professional backgrounds, and had varying training and teaching experiences.
The majority of the participants had taught English internationally as well, which is a typical characteristic of the UK TESOL teacher population. Given that Tavakoli and Howard (2012) did not find a significant correlation between years of experience or context of teaching and teacher research engagement, these variables were not included in the current study. While the study assumes that the participants belong to different CoPs, the focus of the study is on teacher practitioners as members of TESOL teachers' CoP. Table 1 presents some demographic information about the participants.
---
INSERT TABLE 1 HERE
---
Interviews
Since the two most recent studies on this topic, i.e. Tavakoli and Howard (2012) and Nassaji (2012), had drawn on questionnaire data, a semi-structured interview was considered a methodologically more appropriate data collection tool that could make up for the limitations of previous research by providing teachers with a more open platform to discuss their perspectives in more depth. Following from Tavakoli and Howard (2012), who found that the concept of research was open to teachers' individual interpretations, the participants were informed of the working definition of research presented earlier in this paper (see Section 2.1). The face-to-face interviews were conducted in a place convenient to the teachers, each lasting 30 to 45 minutes. The purpose of the study was explained to the participants and informed consent was sought before the data were collected. All but one of the interviewees (see note ii) agreed to the interviews being digitally recorded.
The interview questions were guided by previous research findings in this area. These questions can be divided into three sections. Drawing on the findings of Tavakoli and Howard (2012), the initial section of the interview aimed to investigate teachers' views on the relationship between teaching and research, the divide between the two and the main reasons for the persistence of the divide. Following from Ellis (2010) and Nassaji (2012), the second section of the interview invited the teachers to provide suggestions for bridging the gap.
Questions about the role of teacher education were included in the last section of the interview as a gap in our understanding of this area has already been identified (Burns & Richards, 2009; Wright, 2009, 2010).
---
Data Analysis
The interviews were transcribed and word processed before they were subjected to a thematic analysis (Creswell, 2007). The process involved three different stages. First, the transcripts were read and coded before a number of salient themes and patterns were identified. This then led to grouping the themes together where possible. In the second stage, in order to examine the applicability of Wenger's CoP framework, the emerging themes were compared with the different aspects and components of Wenger's CoP discussed in Section 3 of the Introduction. These themes were then put under the categories of Wenger's analytic framework to find out whether they could provide a response to the research questions. In the last stage, a colleague experienced in working with Wenger's CoP framework examined the data separately. Any points for discussion or disagreement between the two coders were reconsidered until agreement was achieved.
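As a purely illustrative sketch of the second stage described above, the short Python snippet below shows one way emergent themes could be grouped under the components of Wenger's CoP framework. The theme labels, keywords, and keyword-matching rule are invented for the example and are not the study's actual codes or procedure; in the study itself this grouping was done interpretively by the researchers.

```python
# Hypothetical emergent themes from a first coding pass (not the study's actual codes).
emergent_themes = [
    "staffroom conversations as a source of ideas",
    "scepticism about the relevance of published research",
    "learning to teach through classroom experience",
    "calls for researcher-teacher dialogue",
]

# Components of Wenger's (1998) CoP framework used as analytic categories,
# each paired with illustrative keywords.
cop_categories = {
    "mutual engagement": ["dialogue", "collaboration", "working with"],
    "joint enterprise": ["classroom", "relevance", "teaching goals"],
    "shared repertoire": ["staffroom", "ideas", "anecdotes"],
}

# Assign each theme to the first category whose keywords it mentions.
mapping = {}
for theme in emergent_themes:
    for category, keywords in cop_categories.items():
        if any(word in theme for word in keywords):
            mapping[theme] = category
            break
    else:
        mapping[theme] = "unassigned"

for theme, category in mapping.items():
    print(f"{category:18} <- {theme}")
```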
---
FINDINGS
In the section below the findings of the study are grouped together to respond to Research Questions 1 to 4 in Sections 4.1 to 4.4 respectively. These findings will reflect the researcher's interpretation of teachers' views on the different aspects of teacher research engagement, and will highlight their suggestions on what can be done to bridge the divide between teaching and research.
---
Teachers' Views on the Relationship between Research and Practice: Interdependence of Learning, Practice and Identity
Fundamental to Wenger's concept of CoP is the intimate relationship between learning, practice and identity. In the field of TESOL teacher education, it is widely accepted that learning is essentially linked to the social and cultural contexts in which it occurs (Faez & Alvero, 2012; Johnson, 2006, 2009; Miller, 2009) and that learning should be perceived as both a cognitive and sociocultural process (Lantolf, 2000; Lantolf & Poehner, 2008; Nasir & Cooks, 2009). From a CoP perspective, learning mainly takes place through participation in social and cultural practices and activities (Lave & Wenger, 1991; Nasir & Cooks, 2009; Wenger, 2000), and is identified as a characteristic of practice and participation in the community of practitioners (Wenger, 1998). Members of a community learn from one another and from more experienced members of their CoP, and they change through the processes of interaction and learning. Identity in Wenger's framework is "a way of talking about how learning changes who we are" and how it creates "personal histories of becoming in the context of our communities" (1998: 5). The teachers in the current study frequently framed their own professional learning in these terms. The teachers' message echoes Wenger's argument that "learning is not merely the acquisition of a body of knowledge, but a journey of the self" (2011). To gain knowledge about their practice, teachers rely on experience and participate in the activities of their CoP, 'old-timers' helping 'newcomers' and enabling them to move from the periphery to legitimate membership of the CoP.
---
Factors Contributing to the Divide: The Defining Elements of CoPs
The data analysis suggests that the participants perceive teaching and research as two different CoPs and that membership of one may not only limit but sometimes exclude membership of the other. The analysis also implies that multi-membership in different professional CoPs has been a continuous challenge.
T5: Researchers come from theoretical perspectives; I'm a teacher coming from sort of, well, from a teaching context, from a real teaching context. … I think um as long as the researcher hasn't been too long out of the classroom then you can rely on their research.
Wenger (1998) argues that organizing themselves around some particular area of knowledge and activity gives members of a CoP a sense of joint enterprise and identity. The joint enterprise is therefore their collective negotiated response to their experiences and practices, and it creates a sense of mutual accountability within the community. The inherent differences between the two CoPs should at least to some extent be attributed to the three cohering features of a CoP, i.e. mutual engagement, joint enterprise and shared repertoire.
T13: And having a dialog between researchers and teachers: so researchers perhaps speaking with teachers about their own interests and what the teachers are interested in and developing a conversation to bridge the gap.
The pursuit of a joint enterprise, e.g. teaching English in language classroom, over time creates resources for negotiating meaning, i.e. a shared repertoire. Teachers' shared repertoire includes ways of doing things, anecdotes and stories they exchange, resources available to them and conversations in staff common rooms. A sustained engagement in their practice enables teachers to interpret and make use of this shared repertoire. The different sets of repertoire teachers and researchers rely on may be another source of divergence between the two communities.
---
T7:
The staffroom is the best place for ideas, um I mean with all that experience why make things difficult for yourself (i.e. engage with research).
---
Bridging the Gap: Bringing the Two Communities Together
Of particular value to this study is Wenger's (1998) notion of 'boundaries'. While their main function is to separate different CoPs, boundaries come into the spotlight when a required type of learning motivates members to move from one CoP to another. The concept of boundaries does not imply that CoPs are impermeable or that they function in isolation.
Rather, connections can be made between CoPs through the use of 'boundary encounters' such as meetings and conversations, collaborative tasks, and sharing the artefacts used by them (Wenger, 1998). Given that boundary encounters allow for importing practices and perspectives from one CoP to another, they have a central role in bringing change to the way a community defines its own identity and practice.
T18: the main job (for the research community) then is to take research and to make it available to practitioners. It is starting the research from where the practitioners wanted.
Fundamental to the success of boundary encounters is the role of brokering, "a process of introducing elements of one practice to another" (Wenger, 1998: 236). Brokers, individuals (and also institutions) who straddle different CoPs, are agents that can facilitate interaction, negotiation and other exchanges between the two CoPs. The concept of potential brokers, those who can connect the two communities, appeared to be a key message from the participants. While teachers working at university language centres were sometimes suggested as potential individual brokers, the main brokering role was attributed to mediatory organizations such as the British Council and the UK's National Research and Development Centre (NRDC).
T18: What NRDC did was to take research and to make it available to practitioners … by starting the research from where the practitioners want it. Those projects and those approaches were useful and successful.
---
Role of Teacher Education
The analysis of the data provides further evidence for a socio-cultural perspective on teacher learning and confirms the significant role of learning as participation in the context of teaching (Lave & Wenger, 1991). The teachers' views indicate that although they have found teacher education useful in providing them with the essentials of classroom practice, they concede that it is teaching experience itself that offers them the most valuable and fruitful opportunity for learning.
---
T8:
Teacher training gives you the initial tools to go and teach but I think the experience you get in your first job is much much more than the CELTA would give you.
While most teachers agreed that initial teacher training programs, e.g. CELTA, do not allow for a focus on research, the more experienced teachers argued that including research training at this stage would be pointless if not counter-productive, suggesting that introducing research into teacher training would only be beneficial at a more advanced stage of teachers' careers.
T15: with CELTA (there is) very little (research) because CELTA is an initial teacher training of 4 weeks where people learn how to teach and the building blocks of that.
And if you put research on top of that it's too much.
With regard to how essential research was to teachers and their professional development, the teachers' views were divided. While some found it less relevant to their needs and not an essential requirement for becoming a professional teacher, others considered research central to teachers' professional practice. Overall, there was an emphasis on the role of research training in encouraging teacher research engagement. Teachers who had taught at university level were often more positive about the value of research and suggested that the university environment had been supportive of this positive attitude. Promoting action research, doing a research-oriented Master's degree and including a stronger research component in teacher education were other suggestions for bridging the divide.
T16: so through a post-graduate, like a Masters degree you could sort of bridge the gap between research and practice, and that's perhaps how teachers have gone on to become researchers, I suppose. … it'd be through teacher trainers and director of studies that research can be passed to teachers.
---
Discussion
One of the key points the current study highlights is the complex relationship between teachers' views on teaching and research, their learning experiences and their identity as professional teachers. The analysis suggests that teacher identity forms and develops primarily through practising teaching and by interacting with other teachers in their CoP. This finding is in line with Freeman and Johnson's observation that learning to teach is a long-term, complex developmental process that operates through participation in the social practices and contexts of L2 teaching (1998: 402). Unlike Varghese (2006), this finding implies that, regardless of their individual expectations and personal histories, the teachers demonstrate a coherent concept of CoP in defining their identity in light of their teaching experience, knowledge and learning as participation. Despite acknowledging the usefulness of research as an underlying assumption, the teachers argue that it is learning as and through participation in the situated contexts of their CoP that gives them ownership of knowledge and establishes them as legitimate participants of the teaching CoP. In this respect, while it confirms Nassaji's (2012) result on teachers' lack of interest in research engagement as one of the reasons for the divide, this finding goes further to explain that teachers' reluctance may originate from their reliance on the knowledge that is owned by them as legitimate participants of the CoP.
In line with the social constructivist view of teacher learning-to-teach in context (Johnson, 2006, 2009; McIntyre, 2005; Miller, 2009), the teachers in this study feel it is necessary to recognise their learning as situated social practice and to acknowledge and appreciate the different ways they construct and define knowledge. This is something that TESOL research should pay more attention to when studying the divide between research and practice.
Answering the question of how teacher knowledge is translated to identity and in what ways it leads to ownership of knowledge lies beyond the scope of this paper. However, the data indicate that, while teacher research engagement is limited, teachers remain committed to the principles of Reflective Teaching (Schön, 1983; Wallace, 1991) and Exploratory Practice (Allwright, 2005). Whether it is possible to follow Clark (2005) to argue that it is philosophy rather than social science that governs teaching practice is beyond the purpose of this study. What this paper can argue for is that, while research engagement seems to have a restricted impact on teachers' practice, it is imperative to find out how principles of Reflective Teaching, usually introduced to teachers during pre-service teacher education, remain embedded in teachers' professional practice in many contexts (Borg, 2010; Burton, 2009; Kiely & Askham, 2012; Miller, 2003; Wright, 2010).
To associate practice and community, the three dimensions of relation in the community, i.e. mutual engagement, shared repertoire and joint enterprise, should be strengthened. One way to investigate the divide is to find out why these dimensions in each CoP are diverging from those of the other. Williams and Coles' (2007) findings from a survey of 312 teachers in the UK report that informal discussions with colleagues, professional magazines and newspapers, and in-service teacher education are the three most common sources of teachers' new knowledge. This is an example of the limited shared repertoire between teachers and researchers. The concept of "barriers to engagement with and in research" is not new in the literature, with scholars such as Borg (2010) and Ellis (2010) listing key obstacles that prohibit teachers from conducting research. Although the presence of these barriers cannot be denied and their impact on deepening the divide should not be underestimated, the underlying problem behind the limited teacher research engagement reported in the literature is more complex than the simple concept of barriers. In line with the findings of Flores (2001), the current study suggests that the impact of pre-service and initial teacher education in preparing teachers for research engagement is limited. It is also known that the role of teacher education in preparing teachers for research engagement has been minimally investigated (Faez & Alvero, 2012; Kiely & Askham, 2012; Miller, 2009; Wright, 2010).
---
Concluding Remarks
Employing Wenger's (1998) CoP framework in this study has offered an insight into the complex relationship between knowledge, learning experiences and identity, and has opened up a novel way of interpreting the divide in the light of the differences between the two CoPs.
However, using this framework has downplayed the role of social forces at work in the creation of CoPs, e.g. the social force that imposes on researchers a research agenda distant from teachers' practical needs (Rock, 2005). Given the dynamic nature of a CoP, it is impossible to consider or evaluate it without taking into account how the world around a CoP influences it. In the current study, however, to achieve the research aims, CoPs were considered in isolation. It is necessary to note that this is a small-scale study drawing on a small set of data in England. Although many of its findings may endorse issues, dilemmas and problems previously reported in various contexts, the impact of local pedagogies (Kumaravadivelu, 2011) should not be underestimated.
There are a number of important conclusions this paper needs to draw.
First, the findings of this study strongly suggest that teachers' knowledge and experience, developing through practice in their CoP, should be acknowledged and valued more intensely by the research community. Research that is aimed at TESOL teachers should be informed by this knowledge and experience, and should be designed to address their needs and requirements. Second, there is a strong need for researchers and teachers to build joint communities and to engage in mutual activities that can bring together a research and a practical focus. In order to indicate their membership of these different but inter-connected CoPs and to help bridge the divide, teachers, researchers and mediatory communities, e.g. the British Council, should take a more active role in promoting collaborative research, running joint projects and holding shared academic and educational events. Richards (2010) refers to a number of successful projects of this kind delivered in Asian contexts. The question to ask is whether such projects can be used as a model to follow in other, similar contexts. The final concluding remark is to highlight the important role of teacher education programmes in enhancing a research environment and in encouraging a research approach to teaching.
Research evidence (e.g. Erlam, 2008; Wright, 2010) suggests that providing a more user-friendly approach to research, combined with a supportive research environment on teacher training programmes, would not only prepare teachers for better engagement with research but would also build confidence and lead to teacher empowerment.
i (2009) distinguishes between teachers' engagement in research and with research. As such a distinction has not been found necessary for the purpose of this study, the term engagement with research is used consistently to represent both types of engagement.
ii In the case of the only interviewee who did not agree to her voice being recorded, detailed notes were taken.
---
Wenger, E. (1998). Communities of practice: Learning, meaning and identity. Cambridge: Cambridge University Press. Wenger, E. (2000). Communities of practice and social learning systems. Organization, 7(2). | 35,171 | 1,248
a7343ac47d27ed622200c6b28e6762a09d0dc131 | The Social Determinants of Adverse Childhood Experiences: An Intersectional Analysis of Place, Access to Resources, and Compounding Effects | 2,022 | [
"JournalArticle"
] | Children across all races/ethnicities and income levels experience adverse childhood experiences (ACEs); however, historically excluded children and families must contend with added adversities across ecological levels and within higher-risk conditions due to systemic inequality. In this grounded theory study, the authors examined how health and social service providers (N = 81) from rural and urban counties in Tennessee provided services to low-income families, children exposed to opioids, and children of immigrants. Guided by an intersectional framework, the authors examined how rural and urban settings shaped higher risk conditions for ACEs and impeded access to resources at the individual, group, and community levels. Findings from this study identified additionally marginalized subpopulations and demonstrated how inequitable environments intersect and compound the effects of ACEs. The authors present their Intersectional Nature of ACEs Framework to showcase the relationship between high-risk conditions and sociopolitical and economic circumstances that can worsen the effects of ACEs. Ultimately, the Intersectional Nature of Aces Framework differentiates between ACEs that are consequences of social inequities and ACEs that are inflicted directly by a person. This framework better equips ACEs scholars, policymakers, and stakeholders to address the root causes of inequality and mitigate the effects of ACEs among historically excluded populations. | Introduction
Child abuse, neglect, and household dysfunction are collectively referred to as adverse childhood experiences (ACEs) and are associated with worse outcomes over a child's lifetime [1]. It is important to establish that although ACEs are not determined by a child's race, class, or gender, they are more prevalent among historically excluded populations in communities made vulnerable by poverty and scarce public resources [2,3]. Previous research findings have suggested the significant impacts of systemic inequality, as historically marginalized children, families, and communities are more likely to live in high-risk environments that compound ACEs [4,5].
While ACEs scholarship has generally emphasized the 10 traditional adverse childhood experiences, emerging studies have acknowledged the importance of ACEs related to environmental factors that may disproportionately affect marginalized children and families [2,6]. Subsequent studies have identified expanded ACEs related to environmental factors (i.e., neighborhood violence, homelessness, foster care, bullying, and racism); this research has: (1) advanced dialogue around the diversity of ACEs; (2) examined the relationship between ACEs and child demographics; and (3) demonstrated which populations of children are more likely to experience ACEs [2,7,8]. However, scholars have not critically examined how systemic inequality shapes lived realities to understand the relationship among high-risk environments, access to resources, and ACEs. In addition, researchers have not explored how intersectional experiences within high-risk environments may compound the effects of ACEs and additionally marginalize populations within and across ecological levels (i.e., individual, group, or community units of analysis).
To account for children's diverse experiences of abuse, neglect, and household dysfunction at the individual, group, and community levels, we examined urban and rural environments in which ACEs occur. We utilized 81 in-depth interviews with health and social service providers in the state of Tennessee to understand how historically excluded populations-that is, low-income families, children exposed to opioids, and children of immigrants-access resources and experience place-based challenges that raise high-risk conditions. Interview participants were at the frontlines of mitigating ACEs, playing a key role in helping families access vital resources and services [9]. Guided by a process-centered intersectionality framework [10-12], our grounded theory research design assumed there are various social, political, and economic inequities that perpetuate conditions of oppression among historically excluded children, families, and communities, which permitted a critical understanding of underlying issues that create high-risk environments [10,13,14].
We present the Intersectional Nature of ACEs Framework to showcase how environments shape high-risk conditions; link intersectional experiences of recognized and unrecognized individuals, groups, and populations; and have confounding effects related to ACEs. While quantitative, population-level studies can describe the existence of an ACE or multiple ACEs, our study identifies the underlying issues that construct high-risk environments and worsen ACEs for children, families, and communities.
---
Background
We utilized the concept of intersectionality to guide our review of the ACEs literature and identify the extent to which empirical studies have included systemic inequality across ecological levels. Intersectionality is a theoretical framework embedded in research studies that seek to support a nuanced understanding of how various forms of experienced inequality interface with one another and exacerbate marginalization among historically excluded populations. This positionality challenges single-axis frameworks and supports the ability to understand within-group differences at the individual (micro), group (mezzo), or community (macro) level [15-17]. In reviewing the ACEs literature, we sought to understand how previous studies have operationalized intersectionality and the extent to which findings have considered additionally marginalized subpopulations within and across ecological levels. Our literature review identifies 20 studies that espoused an intersectional framework directly or indirectly to examine the ACEs phenomena among historically excluded populations.
---
Expanded ACEs and Historically Excluded Populations
To date, ACEs scholarship has identified the following experiences as forms of expanded ACEs: neighborhood violence, witnessing violence, bullying, poverty, homelessness, and foster care [8,18-23]. Although studies investigating expanded ACEs have focused on how children interface with environments, most have not described upstream factors that raise risk for ACEs and contribute to experiences of co-occurring ACEs among children who experience the consequences of systemic inequality. For example, two studies that identified expanded ACEs utilized the differential exposure hypothesis to contextualize the examination of which groups or populations were more likely to be exposed to ACEs per gender, economic status, and/or race [6,24]. Recent studies have also linked newly identified ACEs with higher risks for negative outcomes, such as poverty, poor mental health, behavior problems, and risky health behaviors [7,8,18,21,22,25].
These and other studies have confirmed that historically excluded populations (e.g., Black, Indigenous, and People of Color, also referred to as people of the global majority) experience additional challenges and are more likely to experience ACEs as a result of systemic inequality and how it shapes their identities and lived realities [5,24,26-29]. While we acknowledge the importance of identifying which populations of children are at risk, we argue that it is critical to establish how systemic inequality within and across ecological levels shapes high-risk environments.
---
Systemic Inequality and Co-Occurring ACEs
Although the aforementioned studies built upon ACEs phenomena among historically excluded populations, they generally did not establish how these experiences are constructed by political and socioeconomic systems contributing to high-risk environments. Moreover, the vast majority of studies did not examine how political and socioeconomic factors contribute to experienced adversity among children. In fact, we identified only one study that considered the nuance of sociocultural factors that shaped high-risk environments associated with ACEs [5]. While it is helpful to know which populations need additional support to address ACEs and build resilience among children, it is even more important to know why higher-risk conditions exist and to address the root causes of inequities that increase the risk of ACEs.
ACEs scholars refer to the differential burden of ACEs as a co-occurring phenomenon, and this, too, is experienced at higher rates among historically marginalized populations [25,29]. The differential burden concept explains why certain groups may experience worse outcomes from ACEs linked to demographic characteristics or social identities [27,29]. Limited access to resources based on one's identity may also play a role in which children affected by ACEs are most likely to access treatment and support. The ability of stakeholders, clinicians, and policymakers to distinguish between demographics and the inequitable environments that raise high-risk conditions for communities made vulnerable is critical to mitigating deficit perspectives and facilitating comprehensive support for children who experience ACEs. For example, higher exposure to ACEs should not be linked to the status of being part of the global majority or belonging to a historically marginalized population; rather, if ACEs are a universal experience, high-risk conditions must be regarded as imposed-upon environments that compound ACEs and inflict additional harm on historically excluded children, families, and communities.
---
Contributions to the Literature
In our review of the literature, no study operationalized a process-centered intersectionality framework or fully discussed how an intersectional approach could advance the analysis of ACEs. The two studies that examined the intersection of ACEs and demographic factors did not expand upon how policies and systems raise the risk of ACEs and stigmatize populations experiencing a higher burden of ACEs; rather, the concept of intersectionality was used to contextualize the introduction of the topic and explain the intersection of factors connected with ACEs [6,30]. Similarly, a study on ACEs and wellness outcomes among Black men who have sex with men introduced the concept of the intersection of identities and dual experiences, but the authors did not consider how researchers might use an intersectional approach to expand upon the understanding of ACEs [29].
Our study contributes to the ACEs literature within various dimensions. First, our approach expands upon the differential exposure hypothesis by explaining the conditions historically marginalized populations are more likely to experience and, thereafter, bridges the topics of high-risk conditions and ACEs. Second, we expand upon the differential burden concept by explaining how policies and systems determine access to vital resources and services for children experiencing ACEs; access to resources and services are important in reducing the risk of intergenerational trauma, abuse, and household dysfunction among children and families [2,31]. We also build upon previous ACEs studies that have espoused an intersectional perspective by being the first to operationalize a process-centered intersectional framework.
---
The Current Study
The purpose of this grounded theory study was to examine how health and social service providers (N = 81) from rural and urban counties in Tennessee provided services to low-income families, children exposed to opioids, and children of immigrants. Specifically, we explored two guiding research questions: (1) How do rural and urban environments shape high-risk conditions for children, families, and communities? (2) How do high-risk conditions compound ACEs and impede access to resources at individual, group, and community levels?
Camacho and Henderson were researchers for the Policies for Action Research Hub study at Vanderbilt University, funded by the Robert Wood Johnson Foundation. Camacho was a Policies for Action Research Hub postdoctoral fellow and responsible for developing and leading the qualitative data collection and analysis effort. Henderson was a Senior Health Services Research Analyst who contributed to quantitative and qualitative data collection, management, and analysis. This study is part of a larger, transdisciplinary study that seeks to identify policies and practices in the state of Tennessee that can improve the health and education outcomes among the state's most vulnerable children. The research team consisted of nine investigators from the Vanderbilt University School of Medicine's Department of Health Policy and Peabody College of Education and Human Development. Collectively, researchers utilized quantitative and qualitative methods to understand the complex relationships that exist between state-level policies and access to services that either facilitate or impede the ability of children and their families to receive vital health and education services.
---
Materials and Methods
---
Recruitment and Sample
The research team constructed a sampling frame to recruit individuals working in state agencies, local health departments, safety-net clinics, schools, and nonprofits. The research team utilized purposive sampling to prioritize organizations that served marginalized families and were situated in different geographic areas that were designated as economically distressed counties, had higher rates of neonatal abstinence syndrome (NAS), and had higher percentages of Latinx/e populations. Thereafter, random sampling was utilized to select participants from each type of organization to ensure diverse organization representation. They purposely selected two counties with the highest incidences of NAS, and the highest rates of Latinx/e children of immigrants.
Administrative data were used to stratify the sample based on region designation (i.e., west, middle, east) as defined by the Tennessee Department of Education and urbanicity (i.e., town, city, suburb, rural). In addition, distances were calculated using the percentages of marginalized populations within each county to select the counties closest to the average Mahalanobis score (a measure of the distance between a point and a distribution). After finalizing the sampling frame, Camacho contacted interview participants via email and phone to request an interview either in person or over the phone. Each in-depth interview was 60 to 90 min in length, and we attempted to establish a supportive interview environment by acknowledging the significance of providing support to historically marginalized populations.
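As a rough, self-contained illustration of this county-selection step, the Python sketch below computes Mahalanobis distances of county profiles from the sample average and picks the county closest to that average. The county names, variables, and values are hypothetical and do not reproduce the study's actual data or sampling frame; in the study, selection would occur within each region-by-urbanicity stratum rather than across all counties at once.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Hypothetical county profiles: percent low-income, percent Latinx/e,
# and NAS rate per 1,000 births (illustrative values only).
counties = {
    "County A": [28.0, 4.5, 18.2],
    "County B": [35.5, 2.1, 30.7],
    "County C": [19.8, 12.4, 9.9],
    "County D": [43.7, 1.8, 25.3],
    "County E": [24.6, 7.3, 14.1],
}

X = np.array(list(counties.values()))
mean_profile = X.mean(axis=0)
inv_cov = np.linalg.pinv(np.cov(X, rowvar=False))  # pseudo-inverse for numerical stability

# Mahalanobis distance of each county from the average profile.
distances = {name: mahalanobis(profile, mean_profile, inv_cov)
             for name, profile in counties.items()}

closest = min(distances, key=distances.get)
print(distances)
print("County closest to the average profile:", closest)
```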
We conducted 47 interviews in 26 counties (of the 95 in Tennessee) in all three regions of the state; nine of these counties were designated as economically distressed. Most interviews were in-depth, one-on-one interviews (34) conducted by two or three members of the research team. When possible, we encouraged interview participants to invite work colleagues to be part of the in-depth interview process to better understand organizational policies and practices from various perspectives; subsequently, we conducted a total of 13 focus groups with two to nine interview participants per focus group. Interview participants worked in the following organizations, and we include the number of interviews conducted with each organization: community advocacy organizations (n = 5), community anti-drug coalitions (n = 8), community mental health centers (n = 3), coordinated school health directors (n = 15), county health departments (n = 4), federally qualified health centers (n = 3), Medicaid (n = 1), neighborhood health centers (n = 3), opioid treatment programs (n = 1), school-based health centers (n = 2), and Tennessee early interventions systems (n = 1). A total of 81 health and social service providers participated in the interviews. The study was approved by the Institutional Review Board at Vanderbilt University, and informed consent was obtained from all interview participants.
---
Data Analysis
We employed a grounded theory methodology to examine data corresponding to our research questions and systematically utilized comparative analysis to construct a theory from the dataset [32]. Modeling the intersectional grounded theory research design [33], we adopted a process-centered intersectional approach to guide our understanding of within-group differences (mezzo) and help identify how inequality functions within structured mechanisms. As developed by McCall (2005) and Davis (2008), an intersectional process-centered method does not limit the intersections of experiences to individuals (micro) but suggests that group intersectional analysis (mezzo and macro) reveals how systemic inequality operates [11,12,14]. Therefore, our data analysis process differentiated between personal and collective experiences to better understand how external factors are placed upon children, families, and communities [34]. In line with this approach, we conceptualized three waves of data analysis, whereby the first wave utilized the interview guide to develop a priori codes, the second used our theoretical framework to identify emergent constructs pertaining to systemic inequality, and the third catalogued systems and processes that prohibited access to resources and services across ecological levels.
All interviews were transcribed verbatim, and we used NVivo (version 12, produced by QSR International, London, UK), a qualitative software program, to organize the qualitative data and codebook for the qualitative data analysis process. We began the first wave of data analysis by applying Boyatzis's (1998) categorical analysis and derived a priori themes from the initial interview protocol [35]. The following themes helped frame the analysis:
(1) information about the organization and role of the service provider; (2) current public assistance policies and support for vulnerable populations; (3) health and mental health resources for immigrant populations; (4) barriers to service awareness and strategies implemented to address systematic barriers; (5) prenatal and postnatal support for opioid users; (6) neonatal abstinence syndrome and treatment; (7) school-based healthcare resources; and (8) prohibitive policies, systems, and structures. In particular, sections in the interview protocol provided interview participants with the opportunity to respond to the changing needs of historically marginalized families, regardless of the organizational type, so that we could uniformly gauge the capacity of organizations to provide important services and resources during the data analysis.
Thereafter, deductive codes emerged respective to each category and our intersectional process-centered theoretical framework. This thematic analysis produced additional codes that expanded across the categories, including: (1) the imposed-upon environments that historically marginalized families have to navigate at micro, meso, and macro levels; (2) how inequality perpetuates intersectional marginality, prohibiting access to services and/or compounding ACEs; (3) identified and non-identified subpopulations as a consequence of inequality; (4) the different forms of burdens that communities made vulnerable have to navigate and how inequality perpetuates burdens; and (5) the compounding effects of high-risk conditions related to ACEs. Since process-centered intersectionality emphasizes the examination of recognized and unrecognized populations, our data analysis included identifying subpopulations that are additionally impacted by high-risk environments not commonly accounted for in research studies. These identities extend beyond general demographic information such as race, gender, or family structure.
As we engaged in the coding process during the first and second waves of the data analysis, we catalogued systems and processes that prohibited access to resources and support across ecological levels and according to place. This process constituted a third wave of data analysis, and we produced a master document that differentiated the types of prohibitive policies, systems, and processes by type of organization and in relationship to lack of monetary resources, administrative burdens, and need for new programs and sources of support.
Our codebook was piloted three times by Camacho and Henderson, using the same five interviews. Codes were modified until there was 90% agreement between the researchers when coding a sample of responses. Upon establishing intercoder agreement, each interview was coded twice. True to the study design, diverse and conflicting data were regarded as indicative of the complexity of the phenomena. This approach assumed that the findings were not contradictory but, rather, multifaceted. Altogether, the three waves of data analysis were utilized to answer the guiding research questions, and our grounded theory approach ensured that the development of our framework was anchored in the dataset following data analysis.
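The snippet below is a minimal sketch of the kind of percent-agreement check described above; the excerpt codes are hypothetical, and the study reports only that coding continued until 90% agreement between the two researchers was reached.

```python
# Hypothetical codes assigned by two coders to the same ten interview excerpts.
coder_1 = ["barriers", "place", "barriers", "policy", "place",
           "policy", "barriers", "place", "policy", "barriers"]
coder_2 = ["barriers", "place", "policy", "policy", "place",
           "policy", "barriers", "place", "policy", "barriers"]

# Simple percent agreement: proportion of excerpts given the same code.
agreements = sum(a == b for a, b in zip(coder_1, coder_2))
percent_agreement = 100 * agreements / len(coder_1)

print(f"Percent agreement: {percent_agreement:.0f}%")  # 90% in this toy example
# If agreement fell below the 90% threshold, the codebook would be revised
# and the sample re-coded before proceeding.
```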
---
Limitations
Given the extent to which process-centered intersectionality recognizes and elevates experiential knowledge, the lack of participation from historically excluded populations who cannot access health and social services is a methodological limitation. By focusing on the experiences of health and social service providers, this study elevates the experiences of a more privileged population. Therefore, findings from this study are not meant to discount the real and lived experiences of historically marginalized populations. Rather, the experiences of service providers are contextualized within our analysis of power per the critical framework employed.
---
Results
Overall, the interviewees identified several factors that shape high-risk environments and compound exposure to ACEs within their service type and community context. Interviewees also described limited access to resources and support due to policy constraints at the local, state, and federal levels which further compounded negative outcomes among children in high-risk environments.
---
Salience of Place: Rural, Urban, and Economic Characteristics
Across Tennessee, health and education service providers spoke in-depth about the salience of place-the ways that rural, urban, and economic status of a county bring forth unique challenges that raise high-risk conditions for the populations served. While both urban and rural communities experience different place-based challenges, in Tennessee, the act of living in a rural community poses additional challenges related to limited socioeconomic opportunities (i.e., employment and living wages) and mobility (i.e., distance between resources and lack of public transit). For many of the interviewees, namely the majority of those who served rural communities with higher poverty levels, poverty brought forth other forms of adversity that either (a) shaped higher-risk environments which subsequently increased the risk of experiencing ACEs or (b) presented additional challenges when children and families needed to access resources and support. To illustrate differences between factors that shape higher-risk environments and the ways in which place can prohibit access to resources and support, we first present place-based challenges linked to poverty followed by related ACEs that may result due to higher-risk environments. Referenced factors that shape higher-risk environments included: food deserts, insufficient number of affordable housing programs due to lower population density, lack of public transit infrastructure, limited number of healthcare and hospital services, workforce recruitment and retainment issues for service providers, limited number of translation services for non-English speaking people, and insufficient number of beds at opioid treatment centers. These experienced place-based conditions meant individuals, families and communities were more likely to experience food, housing, and transit insecurity across ecological levels, as well as have unmet mental and physical health needs. The relationship between higher-risk environments and adverse childhood experiences translated into increased risk of experiencing ACE(s), as well as limited access to resources and support. For example, families in rural or sparsely populated counties lacked access to public transit in under resourced towns, which limited their ability to travel to receive health and social services. According to service providers, lack of resources and access to services contributed to intergenerational cycles of poverty, addiction, and other household-level crises that negatively affected children. Consequently, place was an important factor that often contributed to risk of exposure to ACEs, root causes of poverty, and the accessibility of resources and supports for children and families. The significance of place and factors that shape higher-risk environments and limited access to resources are illustrated in the following interview excerpts: "So, we didn't have a social worker. And the reason that the social worker is so important is, we are in a rural area. Our poverty rate here in [County] is 43.7%. So, I have a lot of students who live in isolation. We have a lot of students that are in transit all the time. I guess they would technically fit under homelessness because they're living with someone else, they're here, there, they're really hard-you know, those are the kids that are truant. Those are the kids who have health needs. So, today, this afternoon, I'll be talking with the department of children's services about continuing that funding for the social worker . . . . 
So, this is something that is needed, because our students who are in poverty, as you know, they're about seven times more likely to have mental issues or be living in a home where someone has mental issues. And that connection between the classroom and that student's parents, the caregiver, is almost nonexistent. We only have about 60% of the people here who have internet, and then they can't afford a phone a lot of times, or if they do afford the phone, they can't keep it on. Yeah, so the social worker has been able to reach out and go to the home, knock on the door, and say, "Hey, I'm from the school, you know, what do you need. How can we help you and how can you help us, you know, to better educate your child?" It has been a wonderful godsend having her to be able to reach out." (Coordinated school health director in a rural, economically distressed county) "So, we have 50 kids that are not being seen at least on school-based therapy. We try to find them places outside the school. See, here's the issue. We need the school-based therapy. And I'm not speaking just for my school. I'm talking about school in general, because (a) there's a transportation issue, especially in your high-poverty school districts, and (b), if parents have a car, they're at work, and they don't have-you know, these low-paying jobs do not offer sick days and, you know, time off and all that kind of stuff. So, parents are-cannot really take off and take the child to therapy, so-at least our Medicaid people are in that boat. And there are others, you know, insurance folks are in the same boat. I mean, we're seeing insured kids at the school, too, but Medicaid kids are top priority. So that's-we have got to have more focus on school-based therapy." (Coordinated school health director in a city-adjacent county)
From the perspective of service providers, the less competitive wages in rural communities negatively affected their respective organization's ability to hire and/or retain highly qualified health and education service providers in the most high-need, high-risk communities. A rural community may or may not have a hospital or urgent-care services, and schools are often the only regular healthcare service that a child receives, especially when the child does not have a pediatric home; that is, the child's family or caretaker has not established a primary physician for the child because the healthcare infrastructure is inaccessible owing to rurality and/or the lack of public transit.
"The other thing that is-that we have a need for is more mental health inside the schools, and in an impoverished area like this, nobody wants to come here. We have been through five school-based mental health counselors in the last 3 years. We have a partnership with a local mental health agency, and they cannot keep someone employed inside this school. These people are getting money to go elsewhere and, you know, work in better places for more money. So, you know, it's really impacting the kids, because we also have a very high suicide rate. For example, you know, our youngest one is 9 years old." (Coordinated school health director in a rural, economically distressed county).
---
Salience of Place: Sociopolitical Context and the Culture of Care
Given the limitations imposed by rurality, we found that place also largely influenced how the culture of care was organized. Health and education social service providers acknowledged and understood the key roles they played in making services accessible to their respective communities and oftentimes referenced their longstanding commitment and role in being a social service provider. For example, they described how they cultivated relationships with public and governmental organizations in their community and how they utilized relationships at times to broker favors for the populations they served. While most social service providers utilized their capital to support historically excluded populations, service providers at times revealed their political beliefs and/or understood that providing care was not an apolitical process. Accordingly, service providers can provide support, to a certain extent, at their own discretion. Their decision to provide support is informed by their personally held beliefs, values, and who they deem to be deserving of such help. For example, statements from interviewees, such as "We take care of our own" and "We know everyone," were illustrative of their close-knit communities and an internal network that is assumed to be accessible by anyone in the community. However, given the sociopolitical nature of insider/outsider dynamics and the ways service providers rely on support from faith-based communities, access to care can be determined by the social capital an individual, group, or population espouses, in addition to whether they possess membership in certain community groups. Below is an excerpt from an educator and health service provider on how he would navigate potential challenges when serving immigrant communities: "One of the first things I would do is call our county health department over there, which we have developed over 12 years, a very close relationship, and just like this situation here, they will give me some advice . . . so then the administrator will take it to their PTA or PTO to try to get some help . . . well, let me say we don't turn anybody over to ICE. We do not send anything to them. We're going to deal with the child and the family and so what we'll do is work through translators and so forth, we do have a-we have a person who works part-time here in our offices that worked for the county many years and has many connections in the county. They will help that father try to find a job, if needed, find somewhere to live, which is a problem in our county is housing, but they will try to find them a place to live temporarily so that that parent can actually start making some money, and then we'll monitor them to-you know, they'll do what they need to do as far as immunizations for the child, making sure they're in school all the time and so forth. We'll try to help them out as much as we can, if somebody else turns them in, we can't help that, but we will not let-We will not let-We will not let the federal government on our campuses to pick people up as their getting their child or dropping them off and that sort of thing . . . I had to chase them [ICE] off one of our primary's campuses last year." (Coordinated School Health Director in rural county) Additionally, changing populations across the state present challenges to families accessing services, who oftentimes rely on service providers to assist them in navigating administrative requirements [9,36,37].
Population changes have also resulted in increased service needs for certain families; these changes include short- and long-term consequences of drug crises, the increasing number of grandparents and great-grandparents who serve as primary caretakers of their grandchildren, and a growing immigrant-origin population, particularly in predominantly White communities. Service providers in the study shared examples of complex situations they had to navigate, often because of the changing population in their community, that required them to provide additional support such as financial assistance or help overcoming language barriers.
A service provider from a federally qualified health center in an urban county shared the following:
"We are able to do so little in those [high-risk] circumstances because on that, on top of that, it's not uncommon for the dad to be suicidal or there's someone in the home that is abusing alcohol or dad-and we've had this happen-dad is HIV-positive, and we find out the baby is HIV-positive. The mom is not there, so we don't know. But we've had-now what do you do? And then they don't have a place to stay. So, I'm going to just add that more to you because this is-this is our every day. We literally have-we've had-in our clinic 2 weeks in a row where the mother-was the father there? The father was okay. The mother was HIV-positive, and at least two out of the three kids were HIV-positive. And they didn't even know. And we are like-and you know, they don't speak English, so I'm just trying to see-this is our normal. That's our every day."
---
Salience of Place: Policies That Inhibit Access to Resources and Support
Health and education service providers understand that policies and systems can significantly influence access to resources for families, particularly those facing additional barriers (e.g., income status or degree of "belonging" within a community). Interviewees described both having insufficient monetary funds to meet the growing needs of the populations they served and working to support marginalized populations that did not have access to resources or economic structures needed to break through various types of poverty cycles due to stringent programs and policies. Interviewees referenced several state- and federal-level policies in connection with barriers that service providers and families navigate across a variety of care settings and county types. In the following illustrative example, an interviewee describes how state-level policies around resource allocation had significantly impacted their ability to meet the needs of children and families:
"But we have a high rate of suicide and mental illness in the region, and I feel like that money should be allocated to areas that are in most need. But what I'm seeing a lot of times is, "Oh, we're going to give it to the bigger places," and what you have there is places that have more money, they have more resources, and then of course your impoverished areas, your small rural areas where nobody wants to come, we can't even afford to hire anybody at this time because the money has been given to bigger places . . . . We need to establish funding that is more reoccurring to the district, and every district, every district on that. Last year, I was able to secure in-kind and grant funding for our district, and that is a huge help to us, especially when, you know, you're in a really small district, and we don't get a lot of funding anyway, especially when it's based on [the state education funding formula]. They're just not going to give it to us. And so, our kids-our kids do without. And probably I would think our kids have more of a need than, you know, some of the bigger schools, you know, get [funding]. You know, I know they have needs, but I doubt that their poverty rate and their mental health issues and their opioid issues here, it's just not the same as it is here. I mean, we are in a crisis here." (Coordinated school health director in a rural, economically distressed county) Additional policies and systems that contributed to economic inequality and poverty among families with whom the interviewed service providers worked included: nonexpanded Medicaid, non-livable minimum wage, increasing cuts to federal and state programs meant to support low-resource households (i.e., social safety net programs such as Supplemental Nutritional Assistance Program, Women Infants and Children, etc.), and decreasing or stagnant investment in health and education programs. These economic disparities were further exacerbated in rural counties and between rural counties due to non-comprehensive measures of poverty, non-equitable investments in employment opportunities across counties, and lack of affordable or physically accessible childcare options depending on where one lives. Interviewees discussed several examples of how these measurements of poverty and other county metrics used by the state disadvantaged their ability to be prioritized for additional resources due to low population density, among other factors. Such economic-based dynamics are especially detrimental to low-income and working poor families.
---
Intergenerational Experiences of Adversity within Communities
The majority of interviewees spoke in depth about the interwoven place-based factors that created high-risk conditions, as well as the intergenerational nature of ACEs and forms of adversity experienced by a high proportion of families within a community. In their descriptions, service providers often explained how environmental factors influenced family-level factors, family structure, and, subsequently, a child's risk of ACEs. What follows are two illustrative quotes from service providers who link factors that shape high-risk conditions with their home life and respective ACEs linked to high-risk conditions. To provide additional context for the first excerpt, service providers from this particular drug coalition in a rural, economically distressed county described how high poverty rates in their county had been the status quo since the 1970s due to the shutdown of the coal mining industry in their geographic area. For many generations, the majority of people living in their area did not have access to many full-time employment opportunities, jobs with adequate salaries, or the ability to develop employment skills. High-risk conditions for individuals, families, and their communities worsened due to an exponential number of pill mills that contributed to the opioid crisis before the government recognized the crisis. The service provider then proceeded to link these factors with various forms of adverse childhood experiences: "There is an actual poverty rate, a 27.7% . . . but [what] that is saying [is accepting poverty rates]-which makes me so angry because we have said to these students, to this group of youth, "Hey, what are we going to do?" That it is okay. And it's not. I mean, now try to tell those kids that they have more worth and value than that, you know? They're living with drug-infested homes. They're living with all of the problems. I mean, it's not just mothers and fathers. It's their grandmothers and grandfathers that are doing this. I had a young man tell me the other day that, you know, he was sitting at my feet, and he said-because he calls me grandma-he said, "I have watched my grandma take [motions injecting arm]-you know, tie off and shoot up in front of me and then she would pass out." And he said that was so scary. And he said what was even scarier, when [he] had to spend the night and all the roaches in the house. You know, this is the reality of what these kids are really living with. And the principal at [the local high school] told me at the very beginning of this, she said, "Our students can't come in here and worry about a chemistry test or, you know, what's going on in high school when they're more concerned about am I going to have food? When I get out at night, who's going to pick me up from school? Will I be allowed to ride that bus? You know, are the things going to be taken care of for me?" And so, they have no worth and value, so they step right into the paths of their parents. They're doing-making the same mistakes. This is generational mistakes in this community." (Director of a community anti-drug coalition in a rural, economically distressed county)
The above illustrative quote references intergenerational drug abuse in the home accompanied by uninhabitable living conditions that present immediate health risks for children and families. The excerpt also references experienced food insecurity, transit insecurity, and an inability to rely on basic needs being met among children living in the community. According to service providers, the identified ACEs are experienced amid the consequences of social, political, and economic contexts that shape the salience of place. To showcase how the salience of place impacts intergenerational family histories, we present the following excerpt from a staff member who was responsible for working with youth within a drug coalition in a rural, economically distressed county: "I had a young man who was brave enough to come down and I had all this [ACEs] logic model and all of my curriculum all set up just so perfectly, and he came down, he said, "I grasp the concept of what you're trying to do here, and it's good," he said, "but you're missing the mark." And he began to really tell me, "You know, when you live in domestic violence, when you live in abusean abused home, and you're-you become a bully, and you-you know, that deals with your mental health. That starts, you know, all your mental health issues that are going on during this, you turn to drugs and alcohol. That's how we self-medicate." . . . He took the black pen and really started writing, "This leads to homelessness." You know, he's a foster child. He came to [this] county because he was in foster care. This is-it just makes so much sense. If you can help them to understand these issues, what led us there . . . it helps you understand that we don't have to go there . . . . Living in these issues, when you live like that, it becomes your comfort zone, even if you don't like it. You become comfortable in this crazy, you know, wacky environment." (Staff member of a community-anti-drug coalition in a rural, economically distressed county)
To provide additional context for the above quote, the staff person had described how this child had experienced additional challenges as a foster youth, having moved from another county without a "close to kin" foster parent. In this case, the child was not just a foster youth, but a child who resided in a physically isolated geographic area, without familial mentorship to meet basic child development needs, and who had debilitated self-esteem as a resident of an economically distressed and rural county. The various identities that are referenced by the child are as follows: unhoused (homeless), victim of domestic violence, compromised mental health status, substance use disorder, intergenerational unhoused status. This quote highlights how high-risk conditions permeate family histories and impact generations.
---
Recognizing Additional Subpopulations and Identities
Guided by process-centered intersectionality, we identified subpopulations (see Appendix A) not commonly accounted for in research studies. Service providers identified these subpopulations according to their social and economic standing, place of origin, geographic location, family history, and experienced high-risk conditions. Our data analysis excavated 119 identities, either referenced directly by service providers or derived from the qualitative data analysis. While some identities are broadly recognized (i.e., race, gender, age, family structure), we compiled a list of additional identities that may contribute to one's likelihood of exposure to ACEs or one's likelihood of experiencing barriers when accessing services. From that point of reference, we developed the Intersectional Nature of ACEs Framework, illustrated in Figure 1.
The Intersectional Nature of ACEs Framework illustrates that ACEs compounded by high-risk conditions are first and foremost undergirded by the consequences of social, political, and economic contexts which in turn shape the salience of place. The salience of place is not experienced during a particular moment; rather, the relationship between policies at federal, state, and local levels and the demographic and economic composition of place determines access to resources. This in turn shapes a dynamic culture of care that determines experienced access to resources among historically excluded populations and subpopulations per their espoused sociopolitical capital. Ultimately, intersectional identities across ecological levels are a result of the dynamism of high-risk conditions which can be traced back to systemic inequality. In this framing, the consequences of political, social, and economic contexts are central: they co-construct the salience of place and intersectional experiences across ecological levels.
The quotations and accompanying descriptions in Table 1 exemplify the intersection among ACEs, salience of place, and high-risk environments, informed by the Intersectional Nature of ACEs Framework. This framework builds upon previous ACEs frameworks, highlighting underlying and often upstream factors that may connect ACEs within the life of a child, a family unit, or a community. Operationalizing an intersectional lens in the study of ACEs allows for a non-cumulative measure of adversity that is not only affected by how many ACEs one experiences, but also by intersectional identities one can possess, thereby examining compounding effects of ACEs. Table 1 includes examples of ACEs experienced by children within various place-based contexts; these contexts have varying risk levels of ACEs and are also affected by broader policies and systems that affect marginality. Thus, the Intersectional Nature of ACEs Framework emphasizes the multiple levels of risk factors that affect exposure and access to resources, focusing on high-risk environments and upstream factors to expand upon previous approaches that primarily emphasize family and individual-level factors.
Figure 1. The Intersectional Nature of ACEs Framework a . Note. a Subpopulations experience high-risk conditions as a result of social, political, and economic contexts at individual, group, and community levels due to systemic inequality. Subpopulations identified in our interview data did not comprise an exhaustive list of place-based determinants. To see the comprehensive list of place-based determinants derived from this study which inform the relationship between the salience of place and intersectional experiences, see the Appendix A. b Place-based contexts can determine high-risk conditions and are constructed by the physical nature of place, as well as governing policies, systems, and processes that determine access and availability of resources, programs, and services. However, access to resources, programs, and services is additionally materialized by the culture of care and how individuals, groups, and communities are recognized. Community characteristics can include the physical location of resources, transportation access, and changes in population and place-based crises that compound high-risk conditions (e.g., opioid crisis). c Access to social services and resources mitigates factors that construct high-risk conditions which in turn lessen ACEs.
---
Table 1 (excerpt). ACE: Neglect. High-risk condition: Economic recession. Illustrative quote: "Well, we were kind of used to recession and poverty because it started back in the '70s, okay? So, we was kind of used to that, but we didn't really know how to evaluate and identify because we had to get ourselves trained, and that's why it was so good when the coalition concept come in and really showed us and trained us on how to evaluate our population, and then everybody went through a decline economically for a long time. We went from where you could not get a job . . . . The unemployment rate at one time was up to 26%. We had the highest unemployment rate in the state. Five years in a row. Five. In the mid-2000s. Because there wasn't a lot of opportunity here, okay? The factories wasn't-weren't-we're a manufacturer type population. We're not a highly skilled labor population, okay? So, excuse me, we went through that, then we seen the drug epidemic starting, and really, I mean, it started as a means of sustainability, people selling their medications to pay their electric bill, and a self-coping mechanism. People were using it because they were depressed. Self-medicate. So that population and the opioid population, along with the country, just became an epidemic here."
Collectively, these findings illustrate a more complex understanding of ACEs and how service providers experience the context of their community and/or their capacity to support these families. Identifying these high-risk conditions highlights the need to respond to ACEs not only on an individual level, but also on family, community, and state levels.
---
Discussion
Inequitable environments are powerful forces that impose additional identities upon historically excluded populations. Children, families, and communities living in high-risk environments who are experiencing ACEs may have difficulty accessing resources due to the intersecting identities they possess. In this study, the high-risk conditions described by service providers across Tennessee illuminate the environmental factors contributing to co-occurring ACEs and challenge scholars and practitioners to reconceptualize the way programs and policies support these families. While original ACEs research focused on adverse childhood experiences individually as events to examine using a cumulative measure, we encourage program administrators to focus on high-risk environments where multiple ACEs may be connected by underlying mechanisms; we also recommend that public administration officials focus on policy solutions that allow families to break out of generational poverty that may contribute to the risk of experiencing ACEs. Inability to access services while living in poverty may specifically contribute to neglect, maltreatment, and foster care. The opioid crisis in Tennessee, for example, and subsequent high-risk environments may facilitate the co-occurring ACEs of substance abuse in the household, mental illness in the household, various types of abuse, foster care, death of a family member, and neglect. Evidently, health and education service providers must consider the basic needs (e.g., adequate food and housing) of historically excluded populations with their specific services. Program administrators should prioritize funding programs that allow providers to care for families and children comprehensively, with additional training on responding to a high-risk situation holistically instead of only assessing a child's well-being with a traditional ACEs screening tool.
Considering the Intersectional Nature of ACEs Framework that emerged from this study, we emphasize the importance of developing health and education resources and services with practices that maximize safety nets. First, when possible, service providers should incorporate cradle-to-grave (lifelong) support and programming. As evidenced in this study, the challenges that historically excluded populations must navigate are many, and it will be impossible for them to navigate complex systems without major reform of current processes. To offset compounding effects from high-risk environments, service providers should redevelop current programs and services to meet the needs of diverse populations, regardless of the stage in life at which services are accessed. This safety-net approach will meet historically marginalized populations within their experienced reality, as well as mitigate the consequences of inequitable environments that ultimately rob people of their agency, human dignity, and ability to realize their full selves. Second, given that ACEs may be rooted in intergenerational high-risk conditions, service providers should adopt multigenerational support as a form of safety net. This approach will enable services and resources to be developed for different-aged populations with the understanding that the objective is to help families break out of high-risk conditions and to prevent younger generations from inheriting them. In the interviews, the multigenerational approach was described as vital by service providers as well as ACEs researchers and providers who had developed programs serving multiple family members together in high-risk situations, such as addiction and parent history of ACEs [38,39].
Situated at the frontlines, service providers can mitigate ACEs and espouse tremendous social capital; they have the agency to either discriminate or use their knowledge to help historically marginalized populations navigate complex social service systems. We strongly advise a higher level focus on improving policies and systems that help families break out of generational poverty and intergenerational cycles of ACEs-mobilizing for under-insured and uninsured populations, exposing the depths of poverty in their communities, educating policymakers, working across organizations to increase protections for the working poor, etc.-so that historically excluded populations can think beyond their immediate survival and work toward realizing intergenerational mobility.
Previous studies have expanded upon the 10 traditional ACEs but have not necessarily contextualized high-risk, inequitable environments and multidimensional elements of place that intersect with ACEs. With the Intersectional Nature of ACEs Framework in mind, we present the term repression-ACEs to differentiate between ACEs that are the consequences of social inequities, such as neighborhood violence and racism, and ACEs that are inflicted directly by a person. Repression-ACEs signals the ways in which ACEs are constructed by higher-risk environments that are the consequences of social, political, and economic contexts that shape the salience of place. We hope this term will differentiate the power of imposed-upon environments and the collective responsibility to disrupt harmful policies and systems.
---
Conclusions
Espousing a critical intersectional approach and utilizing in-depth interviews to understand the ACEs phenomenon, this study's results make a significant and interdisciplinary contribution to the ACEs literature and deepen understanding of the social problems service providers navigate. When mitigating ACEs through prevention and support, scholars and practitioners need to consider the precarious place in society children inhabit and how policies and programs have the potential to worsen high-risk conditions. If scholars, policymakers, and practitioners provide high-risk communities with sufficient resources and support, the collective efforts can hopefully prevent children from experiencing the very worst consequences of childhood abuse, neglect, and household dysfunction.
These identities may worsen effects of ACEs; each was referenced directly or indirectly in our interviews with 81 service providers (see Appendix A).
2. English as a second language
3. Non-English speakers (English as a third language, Spanish as a second language, where an Indigenous language is the native language)
4. Kin foster parents
5. Close to kin, lives in same county
6. Non-kin foster parents
7. Foster youth
8. Guardian grandparents and great-grandparents
9. Temporary kin guardians
10. Special needs guardians
11. Residents of an immigration raid county who are perceived to be of Latinx/e origin and/or undocumented
12. Residents of an immigration raid county whose fear of being profiled prohibits their ability to drive, travel, and seek health and education support services
13. Residents of a distressed county
14. Residents of a rural county
15. Residents of a high-risk geographic area (proximity to a pill mill, cartel route, Appalachia, mountainous terrain, and/or a physical divide that perpetuates socioeconomic stratification, etc.)
16. Residents who do not have access to a hospital or urgent care (healthcare desert)
17. Women who live in healthcare deserts and do not have access to women's healthcare
18.
31. Employment: employed; working poor; underemployed; unemployed; exploitative work conditions; underserved (i.e., no access to healthcare benefits); non-highly skilled; non-highly skilled, manual-intensive labor; essential
32. Housing: housed; housed without basic necessities (e.g., running water, electricity); housing insecure
---
Data Availability Statement: Interview data used in the study cannot be publicly shared.
---
Acknowledgments:
The authors would like to thank the P4A Vanderbilt research team for their contributions to the qualitative study materials, as well as their feedback on the manuscript.
---
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
---
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; the collection, analyses, or interpretation of data; or the writing of the manuscript.
---
Appendix A. Subpopulations at Individual, Group, and Population Identities
These subpopulations are not commonly accounted for in research studies. They were identified by service providers according to their social and economic standing, place of origin, geographic location, family history, and experienced high-risk conditions. The expanded-upon populations in this list present additional information about the experiences referenced by health and social service providers. Higher-risk environments construct circumstances that may contribute to higher-risk conditions and worsen effects of ACEs.
5ebe5089c96f5ffa854b1e835a4e30d1a0b9b20c | Knowledge, Attitudes, and Practices of Sex Workers of Three South African Towns towards Female Condom Use and Contraceptives | 2023 | [
"JournalArticle"
] | Female sex workers are a marginalized and highly vulnerable population who are at risk of HIV and other sexually transmitted diseases, harassment, and unplanned pregnancies. Various female condoms are available to mitigate the severity of the consequences of their work. However, little is known about the acceptability and usage of female condoms and contraceptives among sex workers in small South African towns. This descriptive cross-sectional study of conveniently selected sex workers explored the acceptability and usage of female condoms and contraceptives among sex workers in South Africa using validated questionnaires. The data were analyzed using STATA 14.1. The 95% confidence interval is used for precision, and a p-value ≤ 0.05 is considered significant. Out of 69 female-only participants, 49.3% were unemployed, 53.6% were cohabiting, and 30.4% were HIV positive. The median age of entry into sex work was 16 years old. Participants reported use of condoms in their last 3 sexual encounters (62.3%), preference of Implanon for contraception (52.2%), barriers to condom use (81.2%), condoms not being accepted by clients (63.8%), being difficult to insert (37.7%), and being unattractive (18.8%). Participants who reported barriers to condom use were 90% more likely to have adequate knowledge than those who did not (PR = 1.9; p-value < 0.0001). Knowledge of condom use was an important factor in determining knowledge of barriers to their use. Reasons for sex work, sex workers' perceptions, and clients' preferences negatively affect the rate of condom use. Sex worker empowerment, community education, and effective marketing of female condoms require strengthening. | Background
Sexually transmitted infections (STIs), including HIV/AIDS, are a major public health burden in South Africa [1,2]. The epidemic is complex and thought to be influenced by a number of factors, including biological, behavioural, societal, and structural factors [1]. Although the contraceptive utilisation rate is high in South Africa, at 64% among sexually active women, unplanned and teenage pregnancies are an on-going problem [3,4]. Many South Africans are believed to be using condoms for HIV prophylaxis, but there are challenges with the use of condoms in certain communities [1,[5][6][7][8][9][10].
The South African government supplies free male and female condoms to the population and places them at numerous distribution points in areas considered hotspots. These include hotels, shops, taverns, health facilities, brothels, and other places where large numbers of people gather at the same time. One of the biggest problems is that even if condoms are used, they are not used consistently, especially in long-term relationships and among those who engage in high-risk sexual practices, such as sex workers [11]. Much of the high HIV prevalence in South Africa is attributable to the inability of women to negotiate for safer sexual practices, often because of age disparities or financial dependence [11,12]. Lack of female-controlled prevention methods also plays a significant role in a woman's HIV risk [11,12].
Male condom compliance requires cooperation from the woman's male partner(s), something that is not always possible in abusive relationships [8,11,12]. Another option to prevent STIs, HIV, and unplanned pregnancies that is in women's control is the use of female condoms [5,8]. This is both liberating and empowering for the woman, as she is in control of the situation and can practice safer sex if she wants to [8,11,12]. The literature suggests that women's empowerment and strategies such as the active promotion of female condom use can play a huge role in addressing challenges such as the high rate of STIs, teenage pregnancy, and HIV/AIDS [8,11-13].
Female condom use in Africa is realistic, and it provides women with more independent protection [13]. It is an alternative that is in the woman's control, with less need to rely on the male partner's cooperation or negotiation skills [14]. However, despite the known effectiveness of female condoms in preventing STIs (and thus reducing their prevalence) among sex workers and women in general, there is low uptake among women, including female health workers [8,14].
The acceptability of the female condom among women faces two obstacles: the reaction of the woman's regular partner and attitudes towards the device itself (appearance, difficulties, or uneasiness concerning its use) [12,13]. It has, however, also been established that the use of a female condom may cause more stigma and challenges for women [15].
As earlier highlighted, sex workers are often at a higher risk of HIV [8,11,15]. They could benefit from the increased promotion and accessibility of female condoms, as it has been shown that an increase in female condom promotion is positively correlated with an increase in female condom uptake among sex workers in Thailand and Madagascar [8]. Equipped with the correct knowledge, sex workers could then also be recruited as peer educators and advocates of safe sexual practices during their trade [16].
In Calcutta, India, the Sonagachi Project employed sex workers as peer educators to distribute condoms, advise peer sex workers on where they can get health services, and disseminate information promoting behavioural change [17]. The 59% rise in condom use in the same period can be attributed to this collaborative model [16,17]. Other countries' sex worker advocacy organisations, such as South Africa's Sex Workers Education and Advocacy Task Force (SWEAT), have adapted the same model of peer education to their context [18].
With no studies reporting on sex work and female condom use in most parts of South Africa, little is known about the acceptability and usage of the female condom among sex workers. This is despite the fact that South African female sex workers are known to have a high HIV prevalence and incidence and are responsible for a significant role in the transmission of HIV [19]. This is because they are known to have unprotected sex with their romantic partners and some of their clients [19,20]. Furthermore, because sex work is illegal in South Africa, advocacy for their sexual and reproductive health is limited [21,22].
This study therefore aimed to determine the knowledge, attitudes, and practices of Grahamstown, Rustenburg, and Brits female sex workers on the use of female condoms and contraceptives. This descriptive study also explores factors that could have informed their knowledge, attitudes, and practices. This study will generate new ideas and new strategies that can be implemented to promote female condom use among women, suited to their needs.
---
Methods
---
Study Design
The study is a quantitative design that surveyed participants over 10 days in 2018. Where it is not explicitly stated to be a female condom, the phrase "condom" refers to condom use in general.
---
Study Setting
The North-West (NW) and Eastern Cape (EC) provinces are two of South Africa's nine (9) provinces located in the north-west and south-east of the country, respectively. The study was conducted in two brothels in Brits and Rustenburg (NW) and in two brothels in Joza township in Grahamstown (EC) between 20-29 September 2018. These are small towns in predominantly rural provinces with similarities in that they have high proportions of truck rest stations and migrant labourers [23]. As mining towns, both Brits and Rustenburg have high proportions of migrant male workers who work far from their wives and thus serve as a good market for female sex workers.
---
Population and Sampling
The target population for this research involved female sex workers of all ages trading at Brits and Rustenburg, under the jurisdiction of Bojanala District in the NW province, and Grahamstown, under the jurisdiction of Cacadu District in the EC province.
The principal investigator (PI) met with all the respondents at their risk reduction workshops (RRW), held on selected days of the week. This setting made it easier since they were not focusing on clients at the time. Seventy participants were offered consent forms on three occasions during the workshop, and all were returned signed as all participants were willing to participate. Surveys were undertaken in a private room within the center where none of the other participants could overhear.
---
Measurements
A researcher-administered questionnaire that was translated into isiXhosa and Setswana was used to collect demographic information; knowledge, beliefs, attitudes, and practices regarding condom use (mostly the female condom); and questions on sexual activity. The latter questions (on sexual activity) were adapted from the youth risk behaviour survey [24] and also incorporated common themes (sexual risk reduction, condom promotion, access, cost, and availability) found in literature [8,11,12,14]. Nine questions were used to assess knowledge of the female condom and contraceptives, the availability, costs, and effectiveness of the former in preventing HIV and STIs in general, and contraceptive options available in the South African public health sector. The content validity was reviewed by two experts (a health promoter and a public health medicine specialist); there was 100% agreement on clarity, and the content validity index was 1.0. A knowledge score of at least 50% was considered adequate. Participants' views on female condoms were assessed by asking them to choose, from four options that were not necessarily mutually exclusive, the one that best reflected their view, in order to ascertain the most commonly expressed view. Perceptions of female condom use were assessed using a 3-item Likert scale (disagree, neutral, agree), where neutral was equivalent to being unsure and/or never having used a female condom before. An exception was the perception of access to female condoms, which used a 3-item Likert scale (very difficult, somewhat difficult, and not difficult). The translation of the questionnaire and the presence of a single interviewer for all participants enhanced the reliability of the study's findings.
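The scoring rule above (nine knowledge items, with at least 50% treated as adequate) can be illustrated with a minimal sketch. This is not the study's scoring script; the function name and the example responses are hypothetical, and only the nine-item, 50%-threshold rule is taken from the text.

# Minimal sketch of the knowledge scoring rule described above (illustrative only).
def knowledge_score(item_responses):
    """Return (percentage score, adequacy flag) from nine items scored 1 (correct) or 0 (incorrect)."""
    if len(item_responses) != 9:
        raise ValueError("Expected responses to all nine knowledge items")
    score = 100 * sum(item_responses) / len(item_responses)
    return score, score >= 50  # a score of at least 50% is considered adequate

# Hypothetical participant who answered 6 of 9 items correctly: about 66.7%, adequate
print(knowledge_score([1, 1, 0, 1, 1, 0, 1, 0, 1]))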
---
Data Management and Statistical Analysis
All variables were captured and coded in Microsoft Excel 2013 and exported to Stata 14.1 for analysis. The numerical data were explored using the Shapiro-Wilk test. While numerical data that were normally distributed (age of participants and the age of entry into sex work) are summarised using the mean, standard deviation (SD), and range, numerical data that were not normally distributed (age of sex debut and the average number of daily sexual clients) are reported using the median and interquartile range (IQR). The two-sample t-test for equal variances was used to test the equality of two means by province, where numerical data were normally distributed, and the Wilcoxon sum rank test (Mann-Whitney U test) was used to test for the equality of two medians if data were not normally distributed.
Categorical variables are presented using frequency tables, percentages, and graphs. Two proportions are compared using the two-sample t-test of proportion. Two numerical variables are compared using the Spearman's correlation. Simple linear regression is used to compare two different associations of knowledge (age in years and the average number of daily sexual clients). Binomial logistic regression is used for bivariate associations of knowledge for the overall population. The prevalence ratio (PR) is a measure of the association of knowledge. The 95% confidence interval (95% CI) is used to estimate the precision of estimates. The level of significance was a p-value ≤ 0.05.
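Although the authors ran the analysis in Stata 14.1, the analytic steps described above can be outlined in a short Python sketch to show how the reported quantities relate to one another. This is an illustrative outline only: the data, variable names, and the log-binomial model used here to obtain a prevalence ratio are assumptions for demonstration, not the authors' code, and the estimates reported in the Results come from the original Stata analysis.

# Illustrative outline of the analysis plan (hypothetical data and variable names).
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.normal(32, 7, 69).round(),                 # roughly normal: mean/SD, t-test
    "daily_clients": rng.poisson(6, 69),                   # skewed: median/IQR, rank-sum test
    "province": rng.choice(["EC", "NW"], 69),
    "adequate_knowledge": rng.integers(0, 2, 69),
})

print(stats.shapiro(df["age"]))                            # Shapiro-Wilk normality check
ec = df[df["province"] == "EC"]
nw = df[df["province"] == "NW"]
print(stats.ttest_ind(ec["age"], nw["age"]))               # two-sample t-test, equal variances
print(stats.mannwhitneyu(ec["daily_clients"], nw["daily_clients"]))  # Wilcoxon rank-sum (Mann-Whitney U)

# One way to obtain a prevalence ratio (PR) with a 95% CI is a binomial GLM with a log link
x = sm.add_constant((df["province"] == "EC").astype(int))
fit = sm.GLM(df["adequate_knowledge"], x,
             family=sm.families.Binomial(link=sm.families.links.Log())).fit()
print(np.exp(fit.params), np.exp(fit.conf_int()))          # exponentiated coefficients: baseline prevalence and PR, with 95% CIs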
---
Ethics and Legal Considerations
The Walter Sisulu University Human Ethics and Biosafety Committee granted ethical clearance and approval for the study to be conducted with an ethics approval number (HREC: 005/2019). Each participant gave informed consent; confidentiality was maintained, abiding by the four ethical principles of autonomy, beneficence, non-maleficence, and justice. Participation was completely voluntary without a promise of financial and/or personal incentives.
---
Results
A total of 70 participants were interviewed, but one participant from the North-West province was excluded due to inconsistent information on pregnancies and three other variables. As a result, only 69 participants are included in the final analysis, of which 21 (30.4%) and 48 (69.6%) were from the Eastern Cape (EC) and North-West (NW) provinces, respectively. The demographic characteristics are shown in Table 1. On average, participants were 32 years old (sd = 7.2, range = 18-46); the youngest age of entry into sex work was 16 years, and the average age of entry was 22.8 years (mean = 22.8; range = 16.0-35.0).
More than half of the participants (53.6%) were cohabiting; 15.9% had multiple sexual partners; 82.6% had at least a matric as the highest level of education; and 21.7% had a tertiary qualification. Thirty-five (50.7%) had a job ranging from being a peer educator (26.1%), being a cashier or an intern (7.3%), administrative or general assistant work (4.3%), and volunteering as a police reservist (1.4%).
All participants knew their HIV status, and there was a prevalence of 30.4% (95% CI: 20.5-42.5), which comprised 19.0% and 35.4% of participants from the Eastern Cape and North-West provinces, respectively. Implanon was the most commonly used contraceptive (52.2%), followed by 34.8% who were on injectable contraceptives, and this trend was a reflection of the picture in the two provinces.
The first sexual encounter was voluntary for almost two-thirds of the participants (65.2%, 45/69); 92.8% of participants reported to have begun sex work due to poverty or unemployment; and 89.9% of participants had been pregnant before (Table 2). Participants from both provinces reported that they had a minimum of four daily sexual clients (median = 6.0; IQR = 5.0-8.0).
Overall, 68.1% of participants had either been pregnant once or twice, and 22.0% had been pregnant 3 to 5 times. Only 20.3% of participants reported ever having an abortion. Twenty-eight participants (40.6%) had a single child, 30.4% had two, and 15.9% had three or more children (Table 2).
Condom use in the most recent sexual encounter was reported by 82.6% of participants (Table 3). Even though condoms were reported to be advantageous by most participants, 81.2% of participants reported barriers to condom use. Such barriers included non-acceptance by clients, resulting in a negative impact on their income (63.8%). In other instances, clients refused to use a condom, and this was reported by 36.2%. Whereas 30.4% of participants found female condoms to be a useful preventative method, 49.3% found female condoms to be uncomfortable (Table 4). Whereas 89.9% of participants had not used a condom in the past 3 months, 17.4% reported having used a female condom at least once during the course of the most recent twelve months. The data in Table 5 show further perceptions of the female condom. Only 8.7% felt female condoms were easy to insert, only 7.3% felt they enhanced pleasure, and 81.2% confirmed that they were adequately promoted. A total of 65.2% of participants reported having consumed alcohol during their most recent sexual encounter. Of all the participants, only one reported drug use before the most recent intercourse.
Adequate knowledge was attained by 76.2% of EC participants and 29.2% of NW participants. Overall, those who reported barriers to condom use were 6.7 times more likely to have adequate knowledge than those who did not, and this was statistically significant (PR = 6.7; 95%CI: 1.01-45.0; p-value = 0.004). Furthermore (Table 6), EC participants were 2.6 times more likely to have adequate knowledge than NW participants, and this was statistically significant (PR = 2.6; 95%CI: 1.6-4.3; p-value = 0.003). There was no statistically significant association between HIV status and the level of knowledge (PR = 0.6; 95%CI: 0.3-1.2; p-value = 0.099). A 1-year increase in age led to a 0.4% reduction in knowledge score, which was statistically significant (p-value = 0.032); despite this, however, only 6.6% of the variation in knowledge score could be attributed to the linear relationship it had with age (R² = 6.6%) (Table 7 and Figure 1). Similarly, the addition of a single sexual client resulted in a 1.9% reduction in knowledge score, and this was also statistically significant (p-value = 0.002); as with age, only 13.3% of the variability in knowledge score could be attributed to its linear relationship with the average number of daily sexual clients (R² = 13.3%). None of these associations were statistically significant when stratified by province. Figure 1 further illustrates the knowledge score for the sex workers in South Africa.
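The simple linear regressions reported above (knowledge score on age, and on the average number of daily sexual clients) can be sketched as follows. The data below are simulated for illustration only; the slope, p-value, and R² reported in this section come from the authors' analysis of the actual survey data.

# Sketch of a simple linear regression of knowledge score on age (simulated data, illustrative only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"age": rng.normal(32, 7, 69)})
df["knowledge_score"] = 60 - 0.4 * df["age"] + rng.normal(0, 10, 69)  # slope of -0.4 built in

fit = smf.ols("knowledge_score ~ age", data=df).fit()
print(fit.params["age"])    # change in knowledge score per 1-year increase in age
print(fit.pvalues["age"])   # p-value for that slope
print(fit.rsquared)         # proportion of variation in score explained by age (R-squared)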
---
Discussion
This study sought to understand the knowledge, attitudes, and practices of sex workers towards female condoms and contraceptive use in the South African context.
One of the most critical but under-valued strategies for reducing the incidence of HIV, other sexually transmitted diseases, and unwanted pregnancies is understanding high-risk populations and their reasons for uptake, or failure to utilise, interventions intended to help them. Understanding baseline knowledge, habits (occupational practices, alcohol use, multiple sexual partners, etc.), perceived threats, perceived susceptibility to an adverse outcome (e.g., HIV infection or loss of income), and perceived benefits of behaviour change should inform efforts to trigger behaviour change [25].
In the absence of government-driven programs for sex workers in South Africa, this study therefore adds value to the paucity of literature in this area, not only to inform the design of interventions but also to help find alternative methods of engaging stakeholders that could extend beyond female sex workers in the design of health interventions [16].
Participants in this study reported having begun sex work due to poverty or unemployment, even those with tertiary qualifications. This is consistent with previous UNAIDS findings, which reported that some individuals choose sex work as an occupation, but for some communities, it remains a means of survival, with as many as 86% of Canadian female and child sex workers from Indigenous communities having a history of poverty and homelessness [26]. Other factors reported to lead to sex work include a lack of education and/or employment opportunities, marginalisation, addictions, and mental illness [26]. This often affects the younger population, which is still in its prime. A peri-suburban South African study reported a median age of 31 years among sex workers in Soweto [27].
It is notable that more than 80% of the participants (all of whom were female) had at least a matric level of education, which puts them above average compared with 25-year-old South African women living in urban and peri-urban areas in general, whose high-school attainment was measured at 68.2% [28]. In South Africa there is a high rate of unemployment, with even graduates struggling to find jobs. Poverty and unemployment are the main contributing factors, which is why young people with matric are found in the sex work industry: poverty and hunger drive them there. This also contradicts the association of sex work with a lack of education seen in other societies elsewhere in the world [26]. Individuals with such a level of education are therefore expected to grasp health promotion with ease if their knowledge is to be enhanced [8,23].
Even though the HIV prevalence of 30.4% is higher than the South African adult population prevalence of 20.4% reported for 2018 [2], it is slightly lower than the South African antenatal HIV prevalence of 30.8% reported in 2015 [2,29]. The HIV prevalence is also far lower than the estimated HIV sex-worker prevalence reported by the United Nations for 2018 of 57.7% [2]. In a study by Coetzee et al., an HIV prevalence of 39.7% was reported in a study population of sex workers in Cape Town, 53.5% in Durban, and 71.8% in Johannesburg [21,27]. With such a high prevalence, it is therefore highly critical for sex workers to protect themselves, their clients, and/or intimate partners against STI (including HIV) infection or re-infection by using dual methods of protection (i.e., condoms and other proven preventive measures). However, condoms were not found to be used consistently in this study, either as a result of non-acceptability by clients, sex workers' perceptions of lost income, or ineffective marketing approaches. Qualitative data supporting survey findings [28] on the inconsistency of condom use resonate with findings from this study, where participants opted against condom use with clients for higher payment, where substance use clouded their judgment, and where they were unable to negotiate safer sexual practices with spouses because of issues related to trust and fear of sexual violence or force from clients or partners [28]. The disadvantages of condom use raised in this study are a common finding in the literature [20]. Opportunity costs of condom use included the fact that using condoms had a negative impact on their income, as some clients would either leave or offer to pay less if a condom was used [20]. In other settings, sex workers have reported preference for the female condom as they could have it on before meeting a client, thus eliminating the need to negotiate condom use [8,14,20].
A further complication of unprotected sex and the consequent unplanned pregnancies is the risk associated with abortion (often in the informal sector due to being stigmatized) [20]. Even though the number of women with a previous abortion accounted for a fifth (20.3%) of all the participants and is lower than other previous African reports of between 22 and 86%, it is still of concern [20]. This suggests a lack of use of other family planning services available.
Implanon and injectable contraceptives were the most commonly used contraceptives by 52.2% and 34.8% of participants, respectively, suggesting preferences for medium-term (mostly 3-months) to long-term (5-years) contraceptives. It is consistent with other literature findings where sex workers have a lower tendency for oral contraceptives as they require daily intervention, which could result in poor compliance [8,20]. In contrast, though, some female sex workers reject injectable contraceptives as they are considered bad for business due to their associated dizziness, nausea, and menorrhagia, or extended vaginal bleeding [20].
In previous Kenyan studies [20,28], participants preferred effective medium- or long-term contraceptives such as injectable contraception or an implant [20,28]. The individual circumstances of a sex worker often interfered with compliance and the correct use of other methods [20,28]. Risky behaviours, such as being drunk, were some of the common reasons associated with poor condom and contraceptive compliance [20]. Injectable contraceptives, Implanon, and intrauterine contraceptive devices are also beneficial as contraceptives for female sex workers who are raped or those who cannot negotiate safe sex [20,28]. Furthermore, condoms could also tear, thus the need for the additional contraceptive measure [20,28].
Even though adequate knowledge was attained by only 43.5% of the participants, those with adequate knowledge were 90% more likely to report opportunity costs associated with condom use. By inference, sex workers will engage in unprotected sex not due to a lack of knowledge but often due to the cost of not giving the client what he prefers. This also suggests that health promotion strategies and messaging are not reaching their target audience, suggesting the need for a change of tactics for better impact.
Even though minimised, this study is not without limitations. It is not, however, anticipated that these could have given rise to different outcomes. Firstly, there is a selection bias in that the participants were recruited from a risk reduction workshop (a controlled environment) and are therefore likely to be more knowledgeable about safer practices than the general population of sex workers.
Secondly, the use of a researcher-administered questionnaire could have led to a social desirability bias. This was limited by the asking of follow-up questions and the rephrasing of some questions in a different section of the questionnaire to assess reliability. As a result, a participant with inconsistent responses was eliminated. Even though the findings are not generalisable for all South African sex workers due to the small sample size, the study has certainly identified the health needs of this marginalised population, as the findings are internally valid and could be the basis for more detailed deductive qualitative studies and prospective quantitative studies. Furthermore, the study has highlighted the importance of increasing contraceptive uptake and the need to promote female condom acceptability and availability among female sex workers.
---
Limitations
Due to the nature of the participants' work, it is often difficult to find them in situ; consequently, the obtained sample may not be representative. This study also could not investigate in depth the events leading to the start of sex work. The paucity of published research further limited the authors' ability to draw on adequate and recent literature.
---
Conclusions
This study has confirmed the low acceptability of female condoms, as manifested by their low usage. Although female condoms are adequately marketed, their low use can be linked to negative perceptions among sex workers and unacceptability to clients. Additionally, poverty and high unemployment rates remain challenges that facilitate decisions to engage in sex work. Knowledge of condom use provided a better understanding of the barriers to condom use. It is therefore imperative to strengthen the approaches used to market condoms to members of the community and to improve access, so as to improve attitudes towards condoms and efficacy in their use. Furthermore, the government needs an inclusive approach to dealing with sex work and its associated risks.
---
Data Availability Statement: All data used in the study will be available from the corresponding author.
---
Author Contributions: N.S. conceptualized this study, drafted the proposal, collected data, and drafted the first draft of the manuscript. S.C.N. and M.P. advised on survey methods and edited versions of the manuscript. W.W.C. co-supervised the research and edited versions of the manuscript. S.A.M. analysed the data, edited versions of the manuscript, and signed off on the final version of the manuscript. All authors provided feedback on the analytical strategy and drafts. All authors have read and agreed to the published version of the manuscript.
---
Conflicts of Interest:
The authors declare no conflict of interest. | 25,309 | 1,687 |
42853cfb20e60f3e5dbb1033f8346a3b76492fbd | Sexual Assault Is the Biggest Risk Factor for Violence against Women in Taiwan—A Nationwide Population Cohort Study from 2000 to 2015 | 2,022 | [
"JournalArticle"
] | Objective: To understand the main types of risk of violence against women in Taiwan. Materials and methods: This study used the outpatient, emergency, and hospitalization data of 2 million people in the National Health Insurance sample from 2000 to 2015. International Classification of Diseases, Ninth Revision diagnostic N-codes 995.5 (child abuse) and 995.8 (adult abuse) or E-codes E960-E969 (homicide and intentional injury by others) were used to define cases, and the risks of first violent injury for boys and girls (0-17 years old), adults (18-64 years old), and elders (over 65 years old) were analyzed. Logistic regression analysis was used for risk comparison, and a p value of <0.05 was considered significant. Results: The proportion of women aged 12-17.9 years who were sexually assaulted was 2.71 times that of women under the age of 12, and the risk of sexual assault for girls and adult women was 100 times that of men. Girls insured under labor insurance or as farmers, members of water conservancy and fishery associations, low-income households, and the community-insured population (with public insurance as the reference group) were significantly more likely to seek medical treatment for sexual assault than adult women. Among them, the risk was greatest for girls from low-income households (odds ratio = 10.74). Conclusion: Women are at higher risk of sexual assault than men, whether they are children or adults, and the risk is highest for women of senior high school age, especially girls from low-income households. The protection of women's personal autonomy is therefore a direction that the government and people from all walks of life need to continue to strive for. For high school students from low-income households in particular, protection must be strengthened through education, social work, and police administration. | Introduction
The World Health Organization (WHO) defines partner and non-partner sexual violence separately. Partner sexual violence is defined as self-reported forced engagement in sexual activity with a current or ex-partner from age 15, engagement in sexual activity despite unwillingness out of fear of how the partner might react, or being forced to do something humiliating or degrading; non-partner sexual violence is defined as being forced, at age 15 or older, by someone other than a husband or partner to perform any unwanted sexual act [1]. The revelation of sexual violence often creates shame and stigmatization of the victim; the perpetrator shames and blames the victim to reduce their own responsibility, and a climate of stigma develops in sociocultural perceptions; as a result, most victims opt not to report their experiences or may not describe what happened to them as sexual violence [2]. WHO defines sexual abuse during childhood and adolescence (child sexual abuse (CSA)) as "the involvement of a child in sexual activity that he or she does not fully comprehend, is unable to give informed consent to, or for which the child is not developmentally prepared and cannot give consent, or that violates the laws or social taboos of society; CSA is evidenced by this activity between a child and an adult or another child who by age or development is in a relationship of responsibility, trust, or power, the activity being intended to gratify or satisfy the needs of the other person" [3].
An important issue in sexual violence is the relationship between the victim and the perpetrator, and recent research has focused on sexual violence committed by both partners and non-partners. The pattern, extent, and effects of the violence, which is often traumatic, may vary by perpetrator [1][2][3]. The occurrence of spousal violence depends on determinants at the individual and environmental levels, with unemployment, poverty, and literacy having a significant impact on spousal violence against women [4]. Transgender and non-binary youths are exposed to significantly more violence than women and men, and experiences of sexual risk taking and ill health show strong associations with exposure to multiple forms of violence [5].
Although most previous research has addressed the impact of domestic violence on women, only a few studies have examined the characteristics of adolescent girls and adult women who experienced sexual violence [6]. In Taiwan, no in-depth study has been conducted on this issue. Therefore, we hypothesized that sexual assault is the biggest risk factor for violence against women in Taiwan. This study intended to understand, for the first time, the main types of risk of violence against women in Taiwan through the National Health Insurance Research Database (NHIRD).
---
Materials and Methods
---
Data Source
Taiwan's National Health Insurance launched a single-payer system on 1 March 1995, and as of 2017, 99.9% of Taiwan's population had participated in the program. This 16-year observational study used a representative 2-million-person sample of the NHIRD's 2000 coverage as the parent cohort (Longitudinal Health Insurance Research Database, LHID2000), tracking new cases from 1 January 2000 to 31 December 2015. The files used were the "Outpatient Prescription and Treatment Details File", the "Inpatient Medical Expense List Details File", and the "Insurance Information File". The violent abuse cases studied included 11,077 people. All personal information was encrypted before the LHID2000 was released in order to protect patient privacy. In the LHID2000, disease diagnoses were coded according to the "International Classification of Diseases, Ninth Revision, Clinical Modification" (ICD-9-CM) N-code standard. Cases that occurred in 2000 were excluded. Figure 1 shows the research-design flow chart of this study.
All procedures involving human participants performed in the research complied with the ethical standards of the institution and/or the National Research Council and the 1964 Declaration of Helsinki and its subsequent amendments or similar ethical standards. All methods were carried out following relevant guidelines and regulations. The Ethical Review Board of the General Hospital of the National Defense Medical Center (C202105014) approved this study.
---
Participants
Children and adolescents who suffered violence (victims of violence) were defined as minors under the age of 18 who were enrolled in the National Health Insurance and sought medical treatment, identified by the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) N-code 995.5 and the external cause codes (E-codes) E960-E969. Violently abused adults were those aged 18-64 years, and the violently abused elderly were those aged 65 years and above, identified by ICD-9 N-code 995.8 and E-codes E960-E969; together these formed the case group (victims of violence). The control group consisted of people who did not suffer violence. People in the case and control groups were matched on index date, gender, and age at a ratio of 1:4.
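As an illustration only, the following minimal sketch shows one way the 1:4 matching described above could be implemented. It is written in Python with pandas rather than the SAS environment actually used for the study, and the column names (id, gender, age, index_date) are hypothetical assumptions, not fields of the NHIRD.

import pandas as pd

# Illustrative 1:4 case-control matching on gender and age; the study also
# matched on index date and performed the matching in SAS, so this is only
# a sketch of the idea rather than the original procedure.
def match_controls(cases: pd.DataFrame, pool: pd.DataFrame,
                   ratio: int = 4, seed: int = 42) -> pd.DataFrame:
    matched = []
    for _, case in cases.iterrows():
        candidates = pool[(pool["gender"] == case["gender"]) &
                          (pool["age"] == case["age"])]
        picked = candidates.sample(n=min(ratio, len(candidates)),
                                   random_state=seed)
        pool = pool.drop(picked.index)  # use each control at most once
        matched.append(picked.assign(case_id=case["id"],
                                     index_date=case["index_date"]))
    return pd.concat(matched, ignore_index=True)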
The insured identity information came from the "unit attribute" variable of the underwriting file. The grouping method considered the original coding, the data carry-out requirements of the Data Science Center, and the actual data distribution. The cases were divided into seven groups, namely, Group 1: "public insurance"; Group 2: "labor insurance"; Group 3: "Farmers"; Group 4: "Members of Water Conservancy and Fisheries Association"; Group 5: "Low-income households"; Group 6: "Community Insured Population"; and Group 7: "Others + missing values" (including religious people, other social welfare institutions, veterans, and others).
The cause of injury in this study was identified from E-codes E960-E969 (see Appendix A for details). Categories were combined according to case counts, in line with the regulations of the Data Science Center of the Ministry of Health and Welfare. The main groups after this grouping were "Grapple, Fighting, Sexual Assault (E960)", "Injury by Cutting Tools (E966)", "Children and Adults Persecuted and Abused (E967)", and "Injured by Blunt Objects or Dropped Objects (E968.2)". "Grapple, Fighting, Sexual Assault (E960)" was subdivided into "Unarmed Combat or Fighting (E960.0)" and "Sexual Assault (E960.1)".
"Persecuted and abused children and adults (E967)" was subdivided into "Persecuted by father, stepfather or boyfriend (E967.0)" and "Persecuted by spouse or partner (E967.3)", and the rest were classified as "Persecuted by others (E967.1), E967.2, E967.4-E967.9)". The rest of the injuring methods were classified as "injured by other methods (E961-E965, E968.0, E968.1, E968.3-E968.7)".
---
Statistical Analysis
Analyses were performed with SAS 9.4 statistical software for Windows (SAS Institute, Cary, NC, USA) at the Academia Sinica branch of the Data Science Center of the Ministry of Health and Welfare. Descriptive statistics were expressed as percentages, means, and standard deviations, and the chi-square test was used to compare differences among the three groups (children, adults, and elderly), including differences in the cause of injury and in the proportion of women who suffered sexual assault across age groups. In addition, logistic regression was used to analyze the risk of sexual assault for women in different age groups or with various occupations (including dependents' occupations). According to the central limit theorem, (a) if the sample data are approximately normal, then the sampling distribution will also be normal; (b) in large samples (>30 or 40), the sampling distribution tends to be normal regardless of the shape of the data; and (c) the means of random samples from any distribution will themselves be approximately normally distributed [7]. A p value < 0.05 was considered statistically significant.
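For illustration, the sketch below re-expresses this risk comparison in Python with pandas and statsmodels instead of the SAS 9.4 environment actually used. The file name, column names, and reference categories are assumptions made for the example; the exponentiated logistic regression coefficients correspond to the odds ratios reported in the Results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per subject, with a binary
# sexual_assault indicator, an age_group label, and an insured_type label.
df = pd.read_csv("victims.csv")

model = smf.logit(
    "sexual_assault ~ C(age_group, Treatment(reference='0-11'))"
    " + C(insured_type, Treatment(reference='public'))",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals; p < 0.05 taken as significant.
ci = model.conf_int()
or_table = pd.DataFrame({
    "odds_ratio": np.exp(model.params),
    "ci_low": np.exp(ci[0]),
    "ci_high": np.exp(ci[1]),
    "p_value": model.pvalues,
})
print(or_table.round(3))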
---
Results
During the 15-year period, 1592 children, 8726 adults, and 759 seniors were injured by violence and sought medical treatment. Among them, 301 children, 217 adults, and 0 seniors were sexually assaulted; sexual assault accounted for 18.9%, 2.5%, and 0% of violence-related injuries in each generation, respectively, and the proportion of children suffering sexual abuse was significantly higher than that of adults (Table 1). Very few men were sexually assaulted: six cases occurred in childhood and five in adulthood. Among female victims of violence, the proportions of injuries caused by sexual assault were 38.9%, 7.4%, and 0% in each generation; the proportion injured by unarmed combat or fighting rose to 24.5%, which was significantly higher than that of adult women, whereas 18.4% and 5.5% were observed for older women and girls, respectively (p < 0.0001) (Table 2). The highest rate of sexual assault was observed among women 12-17 years old (54.8%), 2.71 times that of women under 12 years old. In addition, women aged 24-44 and 45-64 years old were more likely to be sexually assaulted than girls under 12 years old (Table 3). Girls and adult women were 100 times more likely to be sexually assaulted than men (p < 0.001). Senior-age students (12-17 years old) were 2.5 times more likely to be sexually assaulted than junior-age students (6-11 years old) (p = 0.003). Among those who were sexually assaulted, the risks for children and adolescents and for adults were 11.4 times (p < 0.001) and 2.51 times (p < 0.001) that of the elderly, respectively (Table 4). Girls insured under labor insurance or as farmers, members of water conservancy and fishery associations, low-income households, and the community insured population (with public insurance as the reference group) were significantly more likely to seek medical treatment for sexual assault than adult women. Among them, the risk was greatest for girls from low-income households (OR = 10.74) (Table 5).
---
Discussion
---
Importance of This Study
The results of this study revealed that, among victims of violence who sought medical treatment, children and adolescents accounted for the largest proportion of sexual abuse, and their proportion of sexual abuse was significantly higher than that of adults. Women aged 12-17 years were 2.71 times more likely to be sexually assaulted than women under 12. High school students (aged 12-17 years) were 2.5 times more likely to be sexually assaulted than primary school students (aged 6-11 years). Young people (18-23 years old) and adults (24-44 years old) were 11.4 and 2.51 times more likely to be sexually assaulted than middle-aged people (45-64 years old), respectively. The risk of sexual assault for girls in low-income households was greater than that of adult women (OR = 10.74). Therefore, sexual assault is the biggest risk factor for violence against women in Taiwan.
The most common forms of violence against women are domestic abuse and sexual violence [8]. Nearly 3 in 4 children, or 300 million children aged 2-4 years, regularly suffer physical punishment and/or psychological violence at the hands of parents and caregivers. About 1 in 5 women and 1 in 13 men reported having been sexually abused as a child (at ages 0-17 years), and a total of 120 million girls and young women under 20 years of age have suffered some form of forced sexual contact [9]. A study of grade 10 students in Iceland showed that 15% of them experienced some form of abuse, and two-thirds experienced abuse more than once [10]. A Swiss study noted that 40% of girls and 17% of boys reported CSA [11]. In a Swedish study, 65% of girls and 23% of boys reported CSA [12], which is consistent with our study.
Numerous studies have demonstrated the impact of poverty or low socioeconomic status (SES) on adolescent development and well-being [13][14][15]. A recent report from the Health Behavior in School-Aged Children study showed that disparities in household affluence continue to have a significant impact on adolescent health and well-being [16]. These findings suggest that adolescents from low-income households have poorer health, lower life satisfaction, higher levels of obesity and sedentary behaviors, weaker communication with parents, less social interaction through social media, and less social interaction from friends and family [17]. Many of these inequalities will have lasting lifelong effects. The findings suggest that these inequalities may be increasing, with widening disparities in several key areas of adolescent health [16,17].
With regard to sexual abuse in adolescents, a few studies have examined the relationship between economic status (poverty or affluence) and CSA, and the results have been inconsistent [18]. Some research has found poverty to be a risk factor for sexual abuse: Sedlak et al. reported that children from families with low SES were twice as likely to experience sexual abuse and three times as likely to be endangered as children from families with higher SES [19]. In their recent study, Lee et al. reported a high risk of severe and multiple types of abuse, including sexual abuse, for children experiencing poverty during childhood; this also affects overall health in adult years, especially for women [20]. However, Oshima et al. found no significant difference in CSA rates between more affluent and poor families, although a significant difference between poor and wealthier victims of childhood sexual abuse was reported for repeated reports of maltreatment to child protective services [21]. A few studies have looked at sexual abuse in low-income households during adolescence: several have shown that the least affluent adolescents report a higher risk of sexual abuse [19,20], whereas one study reported no significant difference in CSA rates between non-poor and poor households [20,21]. Differences in these findings may stem from differences in research methodology, given that Oshima et al.'s data were derived from CSA reports to child protective services [19][20][21].
A low SES is an indicator of social disadvantage and, for women, may independently increase the risk of sexual abuse. The double-harm hypothesis proposes that two or more concurrent sources of social disadvantage may interact to produce particularly negative outcomes. The detrimental effects of low SES may therefore be more pronounced in adolescent girls than in boys [22]. The results of this study support this line of thinking.
From the perspective of violent criminology, countries attempt to prevent violence against women by formulating laws related to sexual assault [23]. However, ineffective and ill-suited laws still fail to protect women from violence. The Domestic Abuse Act of 2021 expanded the legal system's role in dealing with domestic violence, made common assault an arrestable offense for the first time, and strengthened civil laws related to domestic violence to ensure that common-law partners of any gender and couples of any gender who have never been married or do not live together receive the same non-harassment and work orders as married individuals [23].
Young people are the most frequent victims of sexual violence, with 12% to 25% of girls and 8% to 10% of boys under the age of 18 being thought to experience sexual violence [24]. In addition, CSA is associated with an increased risk of dating violence in all three forms (psychological, physical, and sexual) among boys and girls [25]. Sexual violence is more likely to occur among young people, women, people with disabilities, and those who have experienced poverty, childhood sexual abuse, and substance abuse [26,27]. Parental addiction, parental mental illness, and exposure to domestic violence, both individually and cumulatively, have been associated with CSA [28].
The shocking incident of two women being kidnapped and murdered in Taiwan at the end of 2020 prompted the passage of the "Stalking and Harassment Prevention Law" [29,30]. Violence against women and girls, irrespective of their social status and cultural level, remains prevalent throughout the world [30]. Previous investigations in Taiwan have noted that sexual assault victims between the ages of 12 and under 18 were the most common age group in 2006-2015 [29] but did not specifically identify low-income households' girls as the most at-risk group [30]. Our study compared the risk of sexual assault between girls and adult women and pointed out that the risk of sexual assault for girls from low-income households in Taiwan is 10.74 times that of adult women.
In Taiwan, according to the latest "Statistical Survey on Intimate Relationship Violence of Taiwanese Women" released by the Ministry of Health and Welfare, 20% of women have been subjected to violence by an intimate partner; mental violence is the most common form, and sexual violence has doubled compared with previous surveys [30]. A slight increase has also been observed in harassment, which is a form of violence in intimate relationships and needs attention in the future [30]. In 2021, a woman in Taiwan was stalked and harassed; with no legal basis for intervention and no way to seek help, an unfortunate incident ultimately occurred, which led to the passage on third reading of the "Stalking and Harassment Prevention Act", giving Taiwan a legal basis for protecting women's rights and interests [30].
---
Cause of High Risk of Sexual Assault among High School Girls
This issue needs to be discussed in light of the criteria for determining sexual abuse. A defining condition of CSA is that the child cannot give genuine consent [31]; however, under legal standards, sexual activity between boys and girls still constitutes sexual assault even when it is mutually consensual [31]. Therefore, when medical personnel in Taiwan encounter sexual assault cases involving patients under the age of 18, they are required to enter the sexual assault code and report the case according to law [31].
Previous studies indicated that adolescents who suffered sexual assault were mostly younger than 14 years old, whereas this research showed that the high-risk group for sexual assault included high school girls aged 13-17 years, which is consistent with the age of female sexual maturity [32]. A Swedish survey revealed that 16.3% of women experienced sexual violence before the age of 18 and 10.2% experienced attempted or completed sexual assault in adulthood [33]. Uncles and stepfathers were more common perpetrators among adolescents, whereas partners or ex-intimate partners were more common among adult women; in most cases, sexual assaults occurred in public places, although sex crimes at the perpetrator's residence were more frequent among adolescents [32]. The 2008-2020 Sexual Assault Notification Case Investigation in Taiwan provides the age distribution of sexual assault victims and perpetrators: over the years, most victims were 12 to under 18 years old, and most perpetrators were 12 to under 18 or 18 to under 24 years old [34].
Feminist scholars reject biological and essentialist explanations, arguing that gender inequality is the driving force behind sexual violence against women [35]. Sanday, who first proposed a theoretical framework for sexual violence, believed that sexual assault is used as a means to control and dominate women and to maintain the hierarchical status of men [36]. However, such a framework cannot fully explain the difference in risk between sexually assaulted girls and adult women. A more reasonable explanation can be obtained from the following three factors [37]: (1) low-income girls face more capable and criminally motivated offenders than adult women; (2) low-income girls, compared with adult women, are more suitable targets for sexual violence crimes; and (3) girls from low-income households are more likely to face the absence of suppressors who can stop the crime. When all three conditions develop in an unfavorable direction, the risk of sexual assault for girls in low-income households reaches 10.74 times that of adult women, and the risk of sexual assault for girls of other insurance statuses is also higher than that of adult women.
This study has several limitations. First, Taiwan's National Health Insurance database delays the release of data for two years; moreover, for 2016 to 2018, the data face the problem of conversion from ICD-9 to ICD-10 codes, which may introduce deviations in the code mapping. Second, the database lacks information on personal factors such as marriage, education level, and living habits. However, child marriage is not a notable problem in Taiwan, and the secondary-education enrollment rate of Taiwanese women (aged 12-17 years) from 2000 to 2015 was 93.49-96.28%; thus, the lack of these variables had little impact on this study. Third, the occupational classification in the health insurance database does not match the classification required for this research, and a more detailed classification could not be obtained. The identification of low-income status, however, is recognized by the relevant Taiwan authorities and is credible, although the researchers cannot avoid relying on this low-income status classification. Finally, after years of promotion in Taiwan, the prevention and treatment of sexual assault now follow a standard medical procedure, and child sexual assault is a publicly prosecuted crime. Therefore, despite the possibility of bias, the researchers believe that any such bias is small.
---
Conclusions
Our results showed that, whether they are children or adults, women face a higher risk of sexual assault than men, and women of senior high school age are at the highest risk, especially girls from low-income households. These results highlight the vulnerability to CSA of children, especially girls, living in low-income households. They also underscore the urgency of financially supporting the children of these low-income households, given the severity of the impact of CSA on the future health and well-being of victims. Therefore, the protection of women's personal autonomy is a direction that the government and people from all walks of life need to continue to strive for. Politicians, health professionals, and the welfare and education sectors play an important role in supporting low-income children and their families. For high school students from low-income households, protection must be strengthened through education, social work, and police administration.
Future studies should compare sexual violence before and after the coronavirus disease (COVID-19) pandemic, for example 2016-2020 versus 2020-2024, once the updated data are released, given that COVID-19 is expected to have exacerbated this phenomenon.
---
Data Availability Statement: Data are available from the NHIRD published by the Taiwan NHI administration. Because of legal restrictions imposed by the government of Taiwan concerning the "Personal Information Protection Act", data cannot be made publicly available. Requests for data can be sent as a formal proposal to the NHIRD (http://www.mohw.gov.tw/cht/DOS/DM1.aspx?f_list_ no=812 (accessed on 13 October 2021)).
---
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Tri-Service General Hospital (C202105014).
---
Informed Consent Statement: Not applicable.
---
Conflicts of Interest:
The authors declare no conflict of interest.
---
Appendix A
---
Homicide and injury purposely inflicted by other persons (E960-E969) [38].
---
E-Code Perpetrator
E967.2 Perpetrator child and adult abuse by mother, stepmother or girlfriend
E967.3 Perpetrator child and adult abuse by spouse or partner
E967.4 Perpetrator child and adult abuse by child
E967.5 Perpetrator child and adult abuse by sibling
E967.6 Perpetrator child and adult abuse by grandparent
E967.7 Perpetrator child and adult abuse by other relative
E967.8 Perpetrator child and adult abuse by non-related caregiver
E967.9 Perpetrator child and adult abuse by unspecified person | 26,581 | 1,886 |
8e767bffa827db87b1aeca52de0e5ab1624b533c | Aging and Financial Planning for Retirement: Interdisciplinary Influences Viewed through a Cross-Cultural Lens | 2,010 | [
"JournalArticle",
"Study"
] | Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. |
• Users may download and print one copy of any publication from the public portal for the purpose of private study or research. • You may not further distribute the material or use it for any profit-making activity or commercial gain. • You may freely distribute the URL identifying the publication in the public portal. Take-down policy: If you believe that this document breaches copyright, please contact us providing details, and we will remove access to the work immediately and investigate your claim. Download date: 07. May. 2024 | 534 | 277 |
8ad5cbbb8bdcfad744b2000364c3c2b4f3dd870b | Reducing the risk of heart disease among Indian Australians: knowledge, attitudes, and beliefs regarding food practices – a focus group study | 2,015 | [
"JournalArticle"
] | Background: Australia has a growing number of Asian Indian immigrants. Unfortunately, this population has an increased risk for coronary heart disease (CHD). Dietary adherence is an important strategy in reducing risk for CHD. This study aimed to gain greater understanding of the knowledge, attitudes and beliefs relating to food practices in Asian Indian Australians. Methods: Two focus groups with six participants each were recruited using a convenience sampling technique. Verbatim transcriptions were made and thematic content analysis undertaken. Results: Four main themes emerged from the data: migration as a pervasive factor for diet and health; the importance of food in maintaining the social fabric; knowledge and understanding of health and diet; and elements of effective interventions. Discussion: Diet is a complex, constructed factor in how people express themselves individually, in families and in communities. Many interconnected factors influence diet choice, going beyond culture and religion to include migration and acculturation. Conclusions: Food and associated behaviors are an important aspect of the social fabric. Entrenched and inherent knowledge, attitudes, beliefs and traditions frame individuals' point of reference around food and recommendations for an optimal diet. |
Despite advances in medical technology and pharmaceutical therapy, coronary heart disease (CHD) remains the leading cause of mortality and morbidity worldwide (1). Asian Indians in particular have been found to have higher levels of risk for CHD, both in their country of origin and among those who make up the diaspora. In addition, Asian Indians experience a first myocardial infarction at a much younger age (2), and mortality due to CHD is five- to tenfold higher in those aged under 40 years (3).
Adherence to dietary best practice recommendations is, among other critical factors, essential for primary and secondary prevention of CHD (4). Recommendations involve consumption of a varied diet high in wholegrain cereals, fruit and vegetables, and foods low in salt, together with limited consumption of saturated fats, sugar, and foods containing added sugars. In addition, the Australian Guide to Healthy Eating provides people with a visual representation to assist the selection of healthy foods (5). Measuring diet empirically, however, is difficult, particularly in terms of accurately reporting intake in relation to overeating and disease (6).
---
Indians and diet
Dietary customs and habits among Asian Indians vary depending on their region of origin in India and their cultural and religious beliefs (7). The traditional Indian diet is carbohydrate dense and lacks high-quality protein and antioxidants (8). In addition, the Indian diet comprises large amounts of added sugars (9,10), large portions (11), and late dinners.
Dietary patterns associated with immigration and acculturation may contribute to a higher risk of heart disease among Asian Indians (12). Asian Indians make up a steadily growing diaspora estimated to be over 20 million worldwide (13). In 2011, Indian-born Australians were the second largest overseas-born Asian group in Australia, and the fourth largest overseas-born group overall (14). In addition to permanent migrants, Australia has a large temporary Asian Indian population in the form of tertiary students coming to study in Australia. Given the evidence of increased CHD among Asian Indians particularly at a younger age, health professionals, and nurses in particular, can provide dietary education to reduce the risk of heart disease in this population.
To date, little research has been done on the complex interplay of psycho-socio-biological factors on food practices in Asian Indians. In particular, no research has been done on the migrant population of Asian Indian Australians. Therefore, understanding the knowledge, attitudes, and beliefs influencing food practices is vital in order to develop culturally appropriate interventions for diet-related behavior change. Rather than measuring these constructs quantitatively, a more in-depth qualitative approach may provide greater insights into the ways these factors interact.
This paper reports and discusses the findings of a qualitative study into knowledge, attitudes, and beliefs relating to food practices and strategies for the prevention of heart disease among Asian Indian Australians.
---
Methods
---
Participants
A convenience sample of migrant Asian Indian Australians who took part in a larger risk factor profile study was recruited to participate in a focus group. Focus groups are particularly useful for exploring complex issues about which little is known. Moreover, it is the dynamic of social interaction among focus group participants that helps elicit rich data, through the mutual support members experience; that support aids deep discussion and therefore rich findings (15,16).
Using a convenience sample, participants who agreed to take a survey associated with the larger study (17) were invited to attend a focus group. Inclusion criteria included adults who identified themselves as Asian Indians who were either migrants to Australia, or born to Asian Indian migrants living in Australia. The sample consisted of Asian Indian Australian adults who were capable of conversing freely in English. This was important as participants came from varying Asian Indian language groups.
To provide equal access, the English language was chosen. Furthermore, a high level of English language skills is known to exist in the Asian Indian population (18). The focus groups were conducted at the university after consultation with participants to ensure that the venue was suitable to them.
---
Data collection
Community leaders of local cultural associations and members of the Indian Medical Associations were contacted from a list provided by the Consul General of India to participate in a larger study titled 'Heart Health and Well being among Asian Indians Living in Australia', a study undertaken to develop and implement an evidence-based intervention to reduce the risk of heart disease among Asian Indians. Participants were contacted by email and invited to participate in this study if they had previously provided the researchers their contact details as part of the larger study, and indicated an interest in participating in a focus group. A letter providing details of what was required during the focus group session was sent out along with a form to establish the most convenient time and location for the focus group. An information sheet was also sent out outlining the key areas of discussion regarding the Asian Indian community relating to practices about wellness, heart health, and preventive health, the various health and community resources available, and what they perceived to be the components of an effective program for reducing the risk for heart disease. Written consent was obtained from each participant prior to conducting the focus groups.
---
Measures
Open-ended questions were developed based on the risk factor data obtained from a substudy of the Heart Health and Well Being among Asian Indians Living in Australia Project. These questions related to participants' perceptions of their food practices, the impact of those practices on heart health, and strategies that would enable people in their community to eat healthily. Focus groups were conducted using a dual-moderation approach, as difficulties have been noted in single-moderator studies where the moderator is required to ask questions as well as keep field notes (19). During the focus group sessions, feedback methods were used by the moderators to reflect back to the participants pertinent issues raised during the dialog. Each focus group lasted 90 min, and data saturation was reached following the second focus group. Each focus group was digitally audio-recorded and transcribed verbatim to allow independent analysis by the research team. Field notes were compiled by each facilitator for inclusion in the analysis. Following the focus group sessions, the moderators met to debrief, note any common themes, and discuss the field notes gathered during the session. These notes informed the data analysis process.
---
Analysis
Following transcription, data were analyzed for emergent themes and subthemes. Three researchers independently analyzed the data and later discussed their results before arriving at a consensus of the essential themes and subthemes (20). Exemplars were selected to illustrate emergent themes and subthemes.
Ethical approval for the study was obtained from the University of Western Sydney Human Research Ethics Committee (approval no.: H8403).
---
Results
Two focus groups were held with a combined sample of 12 participants (Group 1: 6 and Group 2: 6). The majority of the participants were male (n = 9).
Participants were aged between 35 and 70 years, all had completed a Bachelor's degree and only one was born in Australia. Two participants were retired but were actively working in their community groups. Participants in the study included a general practitioner, accountants, dietitians and financial advisors. Their length of stay in Australia ranged from 5 to 40 years.
The main themes that emerged from the focus groups included (1) migration as a pervasive factor for diet and health; (2) the importance of food in maintaining vital social fabric; (3) knowledge and understanding of health and diet; and (4) preventing heart disease and improving health.
---
Migration as a pervasive factor for diet and health
Indian Australians identified the challenges of migration as negatively influencing dietary practices and health. Subsumed within this theme were challenges relating to stress and under-employment, loss of the extended family, and financial pressures.
---
Stress and under-employment
Migration as a source of considerable stress was discussed by the majority of participants. The stress associated with migration, particularly for skilled migrants, was substantial. Participants discussed the challenge migration presented in terms of affordability for living in the new environment. One aspect was under-employment of professionally trained migrants. This under-employment had a perceived impact on the health of the family, particularly the dyad as both husband and wife needed to work. As one participant expressed:
[P]: . . . when we came here the whole thing changed, the whole place changed . . . the women had to look for a job, the children are neglected, the food were prepared in haste.
The introduction of fast food was implicated in this process and, therefore, the westernization of their diet leading to a perceived reduction in health. One participant stated:
[M2]: . . . when I was working . . . lunch becomes a fast food type of thing . . . you get used to this . . . if it's pizza or whatever you want to eat you see, or McDonalds.
---
Access to low-cost low-nutrient foods
Migration also added financial pressures to new immigrants leading to increased risk factors for heart disease. In particular, unhealthier food choices being cheaper than more nutritious foods were demonstrated in the discussions. Examples included ice cream, pizza, and beer all being cheaper to consume. As one participant described his transformation upon migration:
[J]: . . . basically I never had ice cream when I was in India. I came here as a student and I found ice cream was the cheapest to eat . . . You know I'm not joking, when I came to Australia I was only 75 kg. In three years I was 120 kg and now I'm 100 kg.
Another participant echoed the above comments elucidating on the link between diet and exercise:
[P]: . . . when I came here I was 128 pounds, now I'm 228 [pounds]. You didn't take out what you put in, and you didn't walk too long, we use the cars. Back home we used to walk to work. The drinks [beer] were so cheap when I came here . . . buy a carton of beer for about $5.99 . . . we used to drink a carton a week.
---
Loss of the extended family
The loss of the extended family as a major social support was identified. The notion of family had to be redefined by including friends as surrogates for that loss of extended family. As one participant stated:
[J]: . . . Back home in India . . . my grandmother was the one who took care of me. So I was getting proper food and not like you know fast food kind of thing.
Participants identified differences between traditional dietary practice and post-migration practice. For example, the number of times a person eats per day has changed; prior to immigration, it was common to eat several small meals per day. [P]: We have the habit of eating five meals a day, when we came here we just eat three meals because we don't even have time.
---
Importance of food in maintaining vital social fabric
Participants discussed the role food played within the traditional contexts of family and community. From the patterns of meals and communal eating to maintaining social cohesion, food was seen as integral to Asian Indian culture. As one participant discussed:
[M]: . . . almost every weekend we socialize. So when we invite somebody, we have all those items [food] and we eat as much as possible.
Beliefs around the importance of types of foods during social events were also expressed including 'sweets' as a culmination to a meal. Two participants illustrated this well:
[L]: . . . some have a sweet tongue . . . without . . . sweets they are not satisfied.
Responding to this comment, another participant stated:
[K]: The meal isn't over.
---
Knowledge and understanding of health and diet
When asked about their knowledge of the connection between diabetes and heart disease, some participants were not aware of the link. One participant made the following comment:
[J]: . . . My family, there's nobody with heart disease. But with diabetes yes. But to be frank with you I still eat sweets and I don't think that will be a problem in my life.
Other misconceptions about health, diet, and heart disease were also expressed, in particular, the issues related to risk factors for heart disease. Being overweight was not necessarily seen as a health-negative issue. Participants discussed the cultural aspects of this notion. When asked how the community in general regarded being overweight, one participant who was a health professional stated:
[L]: I think they disregard it probably . . .. They know, 'I am overweight', but still when they see a piece of sweets they forget about it [the weight].
Fatalism regarding health and health outcomes was noted by participants. This was expressed more in terms of comparing the apparent irony of a person of advanced age with multiple risk factors yet appearing well.
[J]: . . . people tend to compare instances, for example say so and so . . . had no problems . . . still he passed away at 50. [another] person was having all sorts of problems, overweight, diabetic and what not, still he is 90 he is still going strong . . .
---
Preventing heart disease and improving health
Participants had much to say about aspects of interventions that may improve health and dietary outcomes. These centered on the family, community, and the use of media.
---
The family as a driver for change
The family was singled out specifically as an important unit for primary and secondary prevention strategies. In particular, the woman's role within the household was emphasized as she was considered the primary preparer of food:
[M]: . . . in our community, food is normally prepared by the lady at home . . . so awareness of those [issues] to the women is more important . . . for instance my wife, she decides what she should cook and how she could cook.
Women were also the ones considered the most knowledgeable concerning dietary issues. One participant commented:
[M2]: . . . my wife is more conscious about health issues than I am . . .
---
Community empowerment
The Asian Indian Australian communities were also identified as important contexts for heart disease prevention interventions. One participant stated clearly:
[M2] . . . awareness and education within the community [Asian Indian Australian] is something which we need to do.
Discussion included the use of cultural fairs, religious settings, such as Hindu and Sikh temples, and community settings such as grocery stores and restaurants. By way of example, the following participant encouraged the use of cultural fairs emphasizing the large numbers of community that attend:
[M]: . . . a good number attend. I mean you cannot cover all the community . . . but majority . . . around 25,000 people . . . that's a big number that you can get at one place.
Religious settings featured as alternative contexts for interventions. The deeply integrated nature of religion with everyday life was emphasized. As one participant expressed:
[M2]: . . . the number of temples which have come up in Sydney since I came here . . . they [the communities] may go for social events and other things, but here the religious thing is a very important thing.
Although the majority of the participants were Hindus, other religions including Sikhism, Buddhism, and Christianity were also discussed. The emphasis was on the role of religious gathering as a context for potential intervention.
Other settings included shopping centers, in particular, culturally-specific shopping areas frequented by Asian Indian Australians.
[M2]: Maybe community grocery shops you see, not the supermarkets so much because they may not provide that type of thing [dietary intervention].
Like other settings, the timing of delivering such an intervention in the community grocery context was considered important.
[J]: Especially the weekends, because on weekends is when much of [the] people go there [Asian groceries].
---
Media as a change agent
Media was the third identified area for focus in developing and delivering a dietary-related intervention with particular emphasis on television and radio.
[M2]: You've got SBS radio now, a Hindi program . . . they've started a new service . . . disability which is again an educational awareness thing.
Print media was also discussed. The many Asian Indian languages and dialects were discussed. However, the provision of health information in the most common languages used by the Asian Indian Australian community was seen as important along with the frustration that the government bodies have little understanding of the complex linguistic needs of the Asian Indian communities.
[M2]: Within our community we've got about 12 languages or even more . . . 18 languages. Unless you have . . . language specific booklets, information ones, they won't understand . . . for instance, we persuaded the health department to produce handbooks in Tamil language, which is again a major language. They thought only Hindi was a major language.
Aging members of the community were singled out as particularly in need of linguistically-diverse material.
[M2]: Older people need it . . . it's an aging community you see.
Language was also considered important when engaging health professionals. The example of a dietician was mentioned.
[L]: . . . in my medical centre we've got a very good dietician, where we send the majority of our people . . . They speak the same language too, so it's very easy for them.
Emphasis was also placed on the temporary migrants including students and the relatives that visit regularly. The following expresses the concerns of the participants well:
[L]: Even more . . . there are a lot of students [that] have come here, have got permanent residency, and their parents are coming regularly . . . most of them are visitors, but they come every year 'cause they've got 10 year permit. They require this type of help in the local language.
---
Discussion
This paper presents the findings from a focus group study of Asian Indian Australians and their perceptions of heart disease and diet. The findings from this study provide insight into the challenges of achieving improved cardiovascular health outcomes amidst misconceptions regarding what constitutes a healthy diet. Migration as a substantial catalyst for diet change and subsequent impacts on cardiovascular health is a key finding of this study. In addition, while Asian Indians have similar anthropometric characteristics, cultural, linguistic, and religious attributes remain quite heterogeneous (21) and have a profound and wide-ranging influence on perceptions of health, heart disease, and dietary practice.
An important insight from this study is that culture is a vital factor in determining dietary behaviors, and that its disruption through migration and subsequent acculturative stress can adversely affect cardiovascular health. This finding is congruent with other literature on migrant health (22,23). Asian Indians have their own culturally based diets and dietary habits comprising mainly carbohydrate-dense foods (24). Biculturalization due to migration can result in consumption of both Indian and Australian foods (25), with one added to rather than replacing the other. For example, rice and roti continue to be consumed as the main meal, with pizza and burgers as snacks, resulting in an even denser carbohydrate diet. These dietary behaviors place the already at-risk Asian Indian population at an even higher risk of cardiovascular disease.
Under-employment and changes to the patterns by which income is brought into the family unit add to the challenges of adapting to a new environment and consequently affect health. In this study, participants expressed concern about unemployment and under-employment and how these affected the affordability of living in a country rated as one of the most expensive in the world (26). In a recent survey conducted in Australia, approximately one-fifth of skilled migrants were either unemployed or under-employed at 6 months following migration, which supports the findings obtained in this study (27). Similar findings have been reported elsewhere, in particular that economic hardship hinders healthy adaptation to the new country, leading to acculturative stress and lower self-reported health status. Costs for fresh foods continue to be higher than so-called 'fast-food' or 'take-away' food, resulting in economically disadvantaged populations opting for the more affordable yet less healthy 'fast-food' options.
Health itself, as a construct, is viewed from a perspective that differs from the dominant Anglo-Saxon Australian point of view. In particular, obesity was not readily perceived as a health-related issue. This finding is interesting because abdominal obesity is a well-established risk factor for heart disease (28), and specific cutoffs for abdominal obesity in Asian Indians (29) have been developed to initiate early management.
In addition, a sense of fatalism governed the perceptions of health. Fatalism describes a belief system where the individual's locus of control over health behavior is externalized (30). Other studies into Asian Indian populations have reported similar issues (17,31,32).
Participants identified the family and the community as important factors in developing future interventions. This is in keeping with the importance of family and wider social cohesion as determinants of health (33). The role of women as providers of meals within the family was identified specifically. In Indian society, men may cook; however, women are generally responsible for everyday cooking. A number of participants stated that it was the responsibility of the wife or mother to shop and cook; therefore, undertaking further research with this group is vital. In addition, establishing a gender-sensitive approach to education regarding food selection and meal preparation is warranted.
Community approaches to dietary health promotion, including media and places of worship, were also emphasized. Given the cohesive nature of the Asian Indian Australian communities, such approaches may prove efficacious. Evidence from the literature (34) supports the use of theory-based, community-based interventions, informed and initiated by community members, in improving dietary habits and sustaining the change. There is, therefore, an urgent need to develop strategies that both respect unique cultural perspectives on health and engage in the appropriate primary and secondary prevention necessary to ameliorate risk. These strategies to improve dietary behavior should build on existing beliefs and attitudes to reduce the risk of cardiovascular disease.
The major strength of the study is the recruitment of Asian Indian participants from the different regions of India given that the cultural, linguistic, and religious attributes of Asian Indians are highly diverse. In addition, the age range of the participants was varied, thus providing a broad perspective relating to the knowledge, attitudes, and beliefs relating to food practices. The participants in the study were community leaders who were educated and holding jobs that were appropriate to their qualifications at the time of the focus group, although some reported to have been under-employed or unemployed previously. In addition, two participants were health professionals who provided their views about their community from a health perspective. Therefore, the sample was able to cover a broad range of Asian Indian migration experiences while capturing the common themes expressed by the participants of both groups. As such, the study was able to gain greater insights into the role that food plays in their lives. The use of focus groups has been found to facilitate groups with a common characteristic in discussing complex issues (16). The provision in advance of the key questions that would be asked during the focus group also provided participants the opportunity to reflect on their responses prior to attending the focus group.
Despite the evidence obtained from this study, the limitations inherent in undertaking such a study need to be acknowledged. The small sample, comprising primarily men, limits the generalizability of the findings. While the focus group is an effective method for uncovering data, it may not capture the depth of individual experience as well as one-on-one interviews do. Furthermore, the interviewer or moderator has less control over the course of the discussion in focus groups.
---
Conclusion
Food and associated behaviors are an important aspect of the social fabric. Entrenched and inherent knowledge, attitudes, beliefs, and traditions frame individuals' point of reference around food and recommendations for an optimal diet. There are many interconnected factors influencing diet choice that go beyond culture and religion to include migration and acculturation. Interventions to improve dietary choices and thereby influence cardiovascular health will require a socially cohesive approach, one which includes families and communities and recognizes the social determinants of health.
New contribution to the literature
1. Provides insights into the knowledge, attitudes, and beliefs relating to food practices and heart disease in Asian Indian Australians for the first time.
2. Highlights, from the participants' perspective, the impact of migration on dietary choice and health outcomes.
---
Conflict of interests and funding
The authors have received funding from the University of Western Sydney, NSW Australia to conduct this study. | 26,589 | 1,328 |
e461a359ef8391ec9860cf2bfce0e4b7ee72501e | Kama Muta: Similar Emotional Responses to Touching Videos Across the United States, Norway, China, Israel, and Portugal | 2,018 | [
"JournalArticle"
] | Ethnographies, histories, and popular culture from many regions around the world suggest that marked moments of love, affection, solidarity, or identification everywhere evoke the same emotion. Based on these observations, we developed the kama muta model, in which we conceptualize what people in English often label being moved as a culturally implemented socialrelational emotion responding to and regulating communal sharing relations. We hypothesize that experiencing or observing sudden intensification of communal sharing relationships universally tends to elicit this positive emotion, which we call kama muta. When sufficiently intense, kama muta is often accompanied by tears, goosebumps or chills, and feelings of warmth in the center of the chest. We tested this model in seven samples from the United States, Norway, China, Israel, and Portugal. Participants watched short heartwarming videos, and after each video reported the degree, if any, to which they were "moved," or a translation of this term, its valence, appraisals, sensations, and communal outcome. We confirmed that in each sample, indicators of increased communal sharing predicted kama muta; tears, goosebumps or chills, and warmth in the chest were associated sensations; and the emotion was experienced as predominantly positive, leading to feeling communal with the characters who evoked it. Keywords communal sharing, cross-cultural, tears, goosebumps, being moved, kama muta An American soldier being reunited with his daughter, Australian men being welcomed by their lion friend in Kenya, a Thai man's doctor canceling his huge bill in gratitude for a kindness years before, a Norwegian singer commemorating the massacre of July 22, 2011. All of these describe |
brief video scenes that have gone viral on social media around the world, touted to "make you cry." It seems that the nationality and identity of protagonists and audiences matter little for evoking this response. Or do they? Certainly the cultural contexts for these emotions are diverse, but are the emotions that emerge essentially the same, even if their cultural significance varies?
We investigated whether individuals from different countries show similar responses to videos like the ones described above. Based on the kama muta model (Fiske, Schubert, & Seibt, 2017;Fiske, Seibt, & Schubert, 2017), we expected similar constellations of emotion terms, sensations, valence, appraisals, and outcomes across cultures. We will briefly summarize the literature, then present the kama muta model, and then report and discuss our studies collecting responses to video stimuli in seven samples from five countries.
---
Being Moved: Phenomenology, Elicitors, and Outcomes
In English, moved or touched or heartwarming seem to be the best descriptors of the emotion typically evoked by such video sequences. In the scientific literatures on emotions, philosophy, and artistic expression and reception, researchers have used various labels that are more or less synonymous: being moved (Cova & Deonna, 2014;Menninghaus et al., 2015), sentimentality (Tan & Frijda, 1999), elevation (Haidt, 2000), kama muta (Fiske, Seibt, & Schubert, 2017), or, in the musical context especially, chills or thrills (Konečni, Wanic, & Brown, 2007). A review of the literature shows some overlapping ideas and observations regarding characteristics of these emotional states.
When sufficiently intense, being moved appears to be characterized by at least three types of bodily sensations: goosebumps, chills, or shivers; moist eyes or even tears; and often a feeling of warmth in the center of the chest (Benedek & Kaernbach, 2011;Scherer & Zentner, 2001;Strick, Bruin, de Ruiter, & de Jonkers, 2015;Wassiliwizky, Wagner, & Jacobsen, 2015). The affective character of this emotional experience appears predominantly positive (Hanich, Wagner, Shah, Jacobsen, & Menninghaus, 2014), although it has been argued by some that the emotion entails coactivation of both positive and negative affect (Deonna, 2011;Menninghaus et al., 2015).
In addition, the motivation of this experience appears to include approach tendencies, such as increased prosocial or communal behavior and strengthened bonds (Schnall & Roper, 2012;Schnall, Roper, & Fessler, 2010;Thomson & Siegel, 2013;Zickfeld, 2015). Elevation is assumed to motivate affiliation with others as well as moral action tendencies (Pohling & Diessner, 2016). Being moved is assumed to lead to a reorganization of one's values and priorities (Cova & Deonna, 2014), to approaching, bonding, helping, as well as promoting social bonds (Menninghaus et al., 2015) and to increased communal devotion (Fiske, Seibt, & Schubert, 2017).
Less consensus has been reached on what exactly evokes such emotional experiences. As the main appraisal pattern, researchers have posited themes of affiliation and social relations, realization of core values, or exceptional realization of shared moral values and virtues (Algoe & Haidt, 2009;Cova & Deonna, 2014;Fiske, Seibt, & Schubert, 2017;Menninghaus et al., 2015;Schnall et al., 2010).
Specifically, the elevation framework (Haidt, 2000; see Thomson & Siegel, 2017, for a review) argues that moving experiences are elicited by observing acts of high moral virtue. Cova and Deonna (2014) have theorized that the emergence of positive core values evokes being moved. Menninghaus and colleagues (2015) proposed that being moved is elicited by significant relationship or critical life events that are especially compatible with prosocial norms or self-ideals. Frijda (1988) characterized sentimentality as evoked by a precise sequence: Attachment concerns are awakened; expectations regarding their nonfulfillment are evoked, and then they are abruptly fulfilled (see also Kuehnast, Wagner, Wassiliwizky, Jacobsen, & Menninghaus, 2014;Tan, 2009). Appraised situations such as these can arouse strong feelings of being moved or touched (Konečni, 2005;Scherer & Zentner, 2001;Sloboda, 1991). These emotion constructs have typically been posited to occur empathically through narratives, theater, movies, or music, rather than resulting from firsthand encounters.
Research assessing moving or touching experiences has been conducted using U.S. American (Schubert, Zickfeld, Seibt, & Fiske, 2016; Thomson & Siegel, 2013), British (Schnall & Roper, 2012; Schnall et al., 2010), French-speaking Swiss (Cova & Deonna, 2014), German (Kuehnast et al., 2014; Menninghaus et al., 2015; Wassiliwizky, Jacobsen, Heinrich, Schneiderbauer, & Menninghaus, 2017), Japanese (Tokaji, 2003), Dutch (Strick et al., 2015), Norwegian (Seibt, Schubert, Zickfeld, & Fiske, 2017), and Finnish (Vuoskoski & Eerola, 2017) participants. Yet each of these studies has used different elicitors and different methods, so, to date, no study has systematically compared responses to moving stimuli with the same measures across a range of cultures.
---
The Kama Muta Model: Intensified Communal Sharing as a Universal Elicitor
Interviews in many different cultural contexts and languages, as well as ethnographic material from various places and times, suggest that people from a wide range of cultures and times have similar feelings and sensations in a set of situations that is broader than previously assumed, yet sharply demarcated. For example, elevation theory states that elevation is primarily a witnessing emotion (Algoe & Haidt, 2009;Haidt, 2000;Thomson & Siegel, 2017) yet the ethnographic material suggests that in many cultures and times, people report the typical being-moved sensations and motivations when feeling one with a divinity-or with their football team (Fiske, Seibt, & Schubert, 2017).
Furthermore, while some theories stress prosocial norms (Menninghaus et al., 2015), moral beauty (Haidt, 2000), or core values (Cova & Deonna, 2014) as central appraisal themes, interviews and ethnographic material suggest that a person who sees a very cute sleeping infant or one who nostalgically remembers her first love can also feel this emotion. Experiments show that seeing cute kittens and puppies also evokes it (Steinnes, 2017). Rather than any specific deed, the affection itself in the perceiver seems to evoke the feeling in these cases. While some theories stress as central attributes of the emotion the coactivation of sadness and joy (Menninghaus et al., 2015), or the contrast between loss and attachment (Neale, 1986), we have found many reports where there is no apparent negative side-as when a guy who is deeply in love proposes to his girlfriend, and both feel this emotion intensely (the "Proposal" video in the current study had this theme).
Kama muta theory predicts that a sudden intensification of communal sharing evokes this emotion, and that it is universal because the underlying social-relational dynamic is universal. This prediction is based on Relational Models Theory (Fiske, 1991, 1992, 2004b), which posits four culturally universal relational models to coordinate social life, implemented in culture-specific ways. These models are Communal Sharing (CS), Authority Ranking (AR), Equality Matching (EM), and Market Pricing (MP), which are based, respectively, on equivalence, legitimate hierarchy, even matching, and proportionality.
Individuals in communal sharing relations are motivated to be united and caring. Communal sharing typically underlies close relations among kin, in families, between lovers, and in closeknit teams, but is also used to construct larger and more abstract social groups and identities. Individuals in a communal sharing relation focus on what they have in common, and sense that they share some important essence such as "blood," "genes," national essence, or humanness. Communal sharing is communicated by and recognized from behavior that connects bodies or makes bodies equivalent and thus indexes the sharing of substance: touch, commensalism or feeding, synchronous rhythmic movement, exchange of bodily fluids, transmission of body warmth, and body modification (summarized as consubstantial assimilation by Fiske, 2004b). Communal sharing is also recognized from behavior that responds to the needs of the relational partner without expecting to be repaid, even among strangers.
Relational models theory thus has a broad yet precisely characterized notion of communal sharing relationships with different types of entities, such as humans, animals, deities, music, or nature. Communal sharing is operating when people perceive themselves as, in some significant respect, essentially the same as these other entities, often because they have a strong experience of consubstantial assimilation, as in celebrating the Eucharist. Communal sharing relationships can be stable or transient, and perceived by both sides or not. We infer them from acts of kindness and of consubstantial assimilation. This wide range of circumstances fits the wide range of constellations where we found evidence of kama muta experiences.
The universal importance of communal sharing makes it likely that there is a positive emotion signaling the event of a communal sharing relation suddenly intensifying (Fiske, 2002, 2010; Frijda, 1988). We posit that this is the emotion that people often call being moved. In a number of languages, labels for this emotion use similar metaphors of passive touch or passive movement (or stirring), or warmth in the chest or heart. In Mandarin, you might say you feel gǎn dòng, 感动; in Hebrew, noge'a lalev; in Portuguese, comovido/a; and in Norwegian, rørt. This emotion leads in turn to an increase in communal feelings toward those who evoked the emotion. Individuals make sense of and share this emotion through culture-specific concepts and practices (Barrett, 2014; Wierzbicka, 1999).
English speakers sometimes use moved or touched for other experiences than the ones we denote as kama muta; conversely, they may denote kama muta with other terms (e.g., nostalgia, rapture, tenderness). Also, communal sharing intensifications may sometimes go unrecognized and unlabeled, yet still evoke the same motives. However, we have found that in many languages, there exist one or more words that are typically used for the emotion evoked by sudden intensifications of communal sharing. For scientific purposes, we cannot rely on imprecise and inconsistently used vernacular words from living languages. To give this construct a precise, consistent scientific definition, we name it with a lexeme from a dead language: kama muta (Sanskrit, literally meaning "moved by love"), which may or may not closely correspond to one or more emotion terms in any given language.
---
Kama Muta as a Universal Emotion
We predict that universally, a kama muta response is elicited by a sudden intensification of communal sharing, and that the emotion in turn makes persons affectively devoted and morally committed to communal sharing with those who evoked the emotion in them, and to a lesser degree with some others. In English, communal sharing relationships are typically labeled and reported as closeness (Aron, Aron, & Smollan, 1992). For Norway and the United States, we found indeed that an appraisal of increased closeness was related to being moved (Schubert et al., 2016;Seibt et al., 2017). However, no evidence has been presented yet on the universality claim, nor on the proposition that kama muta leads to feeling close and communal with the person who evoked it.
As explained above, communal sharing is recognized from acts of consubstantial assimilation, or from acts of great care. Consubstantial assimilation, in turn, encompasses hugs, reunions, wishing or imagining another near, kissing, holding hands, sharing food, or dancing or singing in synchrony. Acts of great care are characterized by attending to the needs of another, which can range from simple kindness to heroic sacrifice. Both should lead to perceived closeness. In addition, when experienced between an individual and a group, consubstantial assimilation should be perceived as inclusion, while acts of great care should be perceived as moral acts. Both should make the perceived actor seem particularly human. In both cases, overcoming obstacles on the way to closeness evokes suspense that should increase the perceived suddenness of communal sharing intensification.
To start examining the claim that kama muta is universally generated by sudden intensification of communal sharing, we sampled from cultures in different regions of the world. These cultures differ in emotional expressivity, as well as in some factors potentially related to it (some sorts of individualism and collectivism, gender equality, and historic heterogeneity; Matsumoto & Seung Hee Yoo Fontaine, 2008;Rychlowska et al., 2015). In addition, we were especially interested in comparing Western and East Asian cultures, as these have been found to differ markedly in the configuration and dynamics of facial emotional expression (Jack, Garrod, Yu, Caldara, & Schyns, 2012). We build on two prior studies that evoked kama muta through autobiographic memories and through a video (along with other videos eliciting other emotions) in Norway and the United States and measured five appraisals (Seibt et al., 2017). The research question is whether people in a wider range of cultures experience kama muta and whether these experiences are predicted by measures indicating intensified communal sharing.
---
Overview of the Current Studies
We conducted studies in the United States, Norway, China, Israel, and Portugal. An overview of the different samples including information on their demographics, sample location, and number of stimuli is provided in Table 1. Apart from being conducted in different languages, the procedures, stimuli, and materials were mostly identical but differed on some occasions as highlighted below. We identified a set of labels for the kama muta experience in each of the five languages.
We presented the same set of four videos in all five countries, along with additional videos that were chosen to fit the culture where the study was run, to have both overlap and variety (we also included one comic to increase stimulus variability). We used video stimuli because they had been shown to evoke the emotion in many participants in the United States and Norway (Seibt et al., 2017). We selected them based on a search for keywords such as "moving" or "heartwarming" in various languages, and based on having similar length (90-180 s).
Based on the universality claim of kama muta theory, we hypothesized that across all five countries we would detect kama muta experiences as a co-occurrence of using kama muta labels to describe the experience, reporting typical sensations, a positive experience, and feeling communal toward the protagonist as an outcome. We further expected that participants across all five nations would experience kama muta when communal sharing relations suddenly intensify. Specifically, the intensity of kama muta as indicated using the labels identified should be predicted (Hypothesis 1) by the judged positivity of the feeling, more than by its negativity in all five countries, and (Hypothesis 2) by the sensations of tears, a warm feeling in the chest, and chills/ goosebumps in all five countries. We further predicted (Hypothesis 3) that the intensity of kama muta relates to feeling unity and closeness with the protagonist in the video in all five countries. Based on kama muta theory's claim on the central appraisal pattern, we hypothesized that the intensity of kama muta would be predicted (Hypothesis 4) by the appraisal of increased closeness among protagonists in all five countries.
All studies presented here were examined and approved by the Internal Review Boards of the respective institutions at which they were performed. For all studies, participants were presented with written information about study procedures, and the contact information of the principal investigator. By proceeding with the study, participants indicated their consent.
---
Studies 1-7
---
Method
Participants. In total, 671 participants were recruited through various means at five different sites: the United States, Norway, China, Portugal, and Israel. An overview of the study details is presented in Table 1, and descriptive statistics for the respective samples are provided in Supplementary Tables S1 and S2. Participants were excluded based on the duration of video presentation (see Table 1). In the Chinese sample, four cases were excluded because of a computer error. Two participants were excluded because they were younger than 18. The final dataset consisted of 624 participants (407 females, 178 males, 39 unspecified gender) ranging from 18 to 74 years of age (M = 29.90, SD = 11.71). With a few exceptions in Norway and Portugal, items were completed in the languages of the respective countries; hence, language is ignored as a factor. We drew two samples each from the United States and Norway, because we introduced a few changes after running the first wave in these two countries (see below) and decided to re-run the study in these countries with new stimulus sets and the changes in place, to broaden our evidence base. Nevertheless, the changes were small enough to justify including both samples in the final analysis.
Notes to Table 1: (a) Exclusion was based on cases where the screen was displayed shorter than the actual length of the video (with a buffer of 10 s), or for longer than 10 times its length (this allowed for long loading times). (b) Some measures not relevant to the present hypotheses were presented in English. (c) In contrast to the other countries, stimuli were not presented in random order.
Overview and design. The topic of the studies was introduced as emotional reactions and media.
After giving informed consent, participants were told that they were going to watch a number of videos. In most samples, participants were required to watch two videos and invited to continue watching (up to 10). In the Chinese sample, participants were instructed to complete all seven. Stimuli were presented in random order except for the Chinese sample.
Materials. A total of 26 videos and one comic strip were utilized across all samples. An overview of the allocation and a summary of all stimuli are provided in the Supplementary Material (Table S2). We used one set of 10 videos in both the U.S. I and Norway I samples, and a different set in the U.S. II and Norway II samples. We showed three unique videos in China and two in Portugal.
All other videos overlapped among the different samples, and four videos were shown in all five countries. Following each video clip, participants were presented with the questions "How moved were you by the video?" and "How touched were you by the video?" on 5-point scales anchored at not at all and very much. See Table 1 for the respective translations. In the Portuguese sample, only one item was used, while the Israeli version included an additional item asking about "How stirred were you by the video?"
Valence was assessed by two items: "How positive [negative] is the feeling elicited by the film?" 1 on the same 5-point scale. For bodily experiences, we asked, "What bodily reactions did the film elicit in you? Mark all the bodily reactions that you were or are still experiencing." Participants answered items on goosebumps, chills, moist eyes, crying, tight throat, and a warm feeling in the chest, along with some filler items, on 5-point scales anchored at not at all and very much. In the first U.S. and Norwegian samples, these sensation items were rated on dichotomous scales and there was no item for crying.
Five appraisals were assessed in all studies: "One or several of the characters did something that was morally or ethically very right" (moral), "All or some characters in the movie felt closer to each other at the end (compared with at the beginning)" (closeness), "Somebody who was excluded at first was included at the end" (inclusion), "All or some of the characters overcame big obstacles during the events" (obstacles), and "All or some of the characters became somehow more human during the events" (human). These were rated on 5-point scales ranging from not at all to to a high degree. Afterward we assessed, among some additional responses to the video clips, feelings of closeness to the main character(s) of the video clips and how much unity the video clip elicited on 5-point scales ranging from not at all to to a high degree.
---
Results
According to our hypotheses, the intensity of kama muta should be predicted in all five countries by (H1) the judged positivity of the feeling, more than by its negativity; (H2) the sensations of tears, a warm feeling in the chest, and chills/goosebumps; (H3) feeling unity and closeness with the protagonist; and (H4) the appraisal of increased closeness among protagonists. We tested each of these hypotheses in separate multilevel models for each sample, regressing a kama muta index on these various predictors. We then combined the samples meta-analytically.
General modeling strategy. We tested our hypotheses with multilevel regression procedures (lme4 in R). Participant and video were added as random factors. Intercepts were allowed to vary randomly according to both participant and video to model different levels of the dependent variable for the different videos and participants (Judd, Westfall, & Kenny, 2012). For each sample, the unstandardized regression coefficients were standardized and employed as an estimate of effect size r (Bowman, 2012). The seven effect sizes were meta-analyzed utilizing the metafor package (Viechtbauer, 2010) in R. For each relation, a random effects model was fitted using a restricted maximum likelihood procedure (REML). Effect sizes were tested for differences across samples.
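To make this modeling strategy concrete, here is a minimal sketch in R of how one such model and the subsequent meta-analytic combination could be set up with lme4 and metafor, as described above. It is an illustration under stated assumptions rather than the authors' actual code: the data frame d, the variable names (moved, positivity, negativity, participant, video), and the vectors r_es and r_se of per-sample effect sizes and standard errors are hypothetical placeholders, and the Bowman (2012) standardization step is only summarized in a comment.

    library(lme4)
    library(metafor)

    # One sample, one hypothesis: regress the averaged "moved" index on rated
    # positivity and negativity, with crossed random intercepts so that the
    # intercept varies by both participant and video.
    m <- lmer(moved ~ positivity + negativity + (1 | participant) + (1 | video), data = d)
    summary(m)

    # After converting each sample's coefficient into a standardized effect size r
    # with a standard error (the paper follows Bowman, 2012, for this step),
    # combine the seven samples in a random-effects meta-analysis fitted by REML;
    # rma() also reports the Q test of heterogeneity and I^2.
    meta <- rma(yi = r_es, sei = r_se, method = "REML")
    summary(meta)

On this sketch's assumptions, the same call pattern would simply be repeated for each sample and for each set of predictors (valence, sensations, communal outcome, appraisals) before the meta-analytic step.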
Throughout this article, we report standardized effect sizes (r) and their correspondent 95% confidence intervals in brackets [a, b]. We do not present p values for the hypothesized effects because their significance can be easily inferred from the confidence intervals. Detailed information on differences across samples, videos, or gender of the participants is presented in the Supplementary Material.
Index of being moved. To evaluate whether ratings of being moved and being touched, or their translation in other samples, could be combined into a common index, we estimated an unconditional three-level hierarchical model in HLM (Hierarchical Linear Modeling Software) for each separate sample (Nezlek, 2016). Reliabilities at Level 1 were sufficient, ranging from .90 to .96 (see Supplement for details). Therefore, ratings of being moved and touched were averaged into the main dependent variable (hereafter, "moved") of the study after subtracting 1 so that the variable ranged from 0 to 4. For the Israeli study, three items were combined, whereas the Portuguese study included only one item, which was utilized as the main dependent variable.
Valence of being moved. To assess whether kama muta is experienced as a positive feeling (Hypothesis 1), we regressed being moved on ratings of how positive and negative the feeling was for each sample separately. The interaction of positivity and negativity was not significant in any sample and, therefore, dropped for the final model. The final random effects model indicated an overall effect size estimate of r = .59 [.53, .65] for positivity on being moved (Figure 1). The overall effect size of negativity on being moved was significantly smaller, r = .16 [.08, .23] (Figure 2). Effect sizes differed significantly for positivity, Q(6) = 31.82, p < .001, I² = 82.46 [56.91, 96.69], as well as negativity, Q(6) = 25.92, p < .001, I² = 75.75 [40.79, 94.89], across samples.
Sensations. To test Hypothesis 2, we combined items on goosebumps and chills into a chills score, while ratings on moist eyes, crying, and a tight throat were combined into a tear score. Being moved and touched was regressed on the chills score, on the tear score, as well as on the item on warmth in the chest, without interactions, in three separate models for each sample. The overall effect size of crying on being moved was r = .54 [.46, .63] (Figure 3), followed by warmth, r = .41 [.31, .50] (Figure 4), and finally chills, r = .31 [.25, .37] (Figure 5). Effect sizes for crying differed for the different samples, Q(6) = 107.68, p < .001, I² = 90.55 [77.08, 97.88]. The same held true for warmth, Q(6) = 50.35, p < .001, I² = 89.78 [74.60, 97.95], and for chills, Q(6) = 19.08, p = .004, I² = 66.27 [19.91, 92.17].
Communal outcome. Items on experiencing unity and closeness with the protagonists of the videos were combined into a communal outcome index. For each sample, being moved was regressed on communal outcome. The overall effect size of communal outcome was r = .59 [.51, .66], supporting Hypothesis 3 (Figure 6). Effect sizes differed for the different samples, Q(6) = 50.91, p < .001, I² = 89.19 [73.35, 97.86].
Appraisals. To test our fourth hypothesis, in a first model, we regressed being moved on the closeness item. The overall effect size was r = .29 [.22, .37] (Figure 7), with effect sizes differing across samples, Q(6) = 24.46, p < .001, I² = 77.24 [43.50, 95.56].
In a second model, being moved was regressed on all five appraisal items. In this joint model, being moved was predicted by increased closeness, r = .12 [.09, .16], perceiving actions as morally right, r = .21 [.17, .25], perceiving someone becoming more human, r = .19 [.11, .27], and perceiving that obstacles were overcome, r = .08 [.04, .13]. Inclusion had no overall effect r = .01 [-.02, .05]. Effect sizes did not differ significantly across samples, except for becoming more human.
---
General Discussion
In seven samples from five countries in East Asia, the Middle East, North America, and Northern and Southern Europe, we measured responses to videos. We used a total of 26 videos, and measured the amount of kama muta evoked using appropriate terms translating moved and touched in five languages. In addition, we assessed the valence of the experience, a set of sensations, appraisals, and communal outcomes. As predicted, in each sample, we found that the kama muta index was related to experiencing the emotion as positive when controlling for negativity, and, to a much smaller extent, also as negative when controlling for positivity. Kama muta covaried most strongly with tears, then with a feeling of warmth in the chest, and least strongly with chills or goosebumps. The kama muta index was predicted by judged increases of closeness among the characters in the video and by three other appraisals. It was related with feeling unity and closeness with the characters.
We focused in the current study on identifying kama muta across cultures, rather than on explaining differences among cultures. In discussing our results, we will thus focus on the overall picture. We briefly discuss the cultural heterogeneity again in the section on limitations at the end. While there was significant variation in all effects across samples, the effects were positive and significant in each sample individually. The kama muta model derives a universal emotion with many names from a universal relational model (Fiske, 1991;Fiske, Schubert, & Seibt, 2017;Fiske, Seibt, & Schubert, 2017). Other models of being moved do not discuss the question of cultural differences or similarities regarding this emotion, nor do other models address the issue of the differences in meaning of vernacular lexemes in different languages (Wierzbicka, 1999). Our cultural comparisons revealed similar appraisals, sensations, valence, and outcomes of kama muta across the five countries. This lends support to the prediction that kama muta is a universal emotion; regardless of whether and how it is labeled in vernacular usage.
---
Valence
Two aspects are noteworthy about our findings regarding valence: The first is the strong and consistent characterization of kama muta as a positive feeling across all samples. The second is the value in assessing positivity and negativity separately. Across all samples, we found that greater negativity predicted greater kama muta when its shared variance with positivity was controlled for. However, this effect was much smaller than the one for positivity. We would not have found this pattern if we had assessed valence on only one dimension.
It is possible that the instances where negativity contributed to being moved were, in fact, not kama muta experiences, but resulted from a broader usage of the terms we used to assess kama muta. It is also possible that some negativity prior to the eliciting event increased kama muta (Fiske, Seibt, & Schubert, 2017). Supporting this reasoning, Schubert et al. (2016) found that when removing the linear and quadratic trends, ratings of sadness had no cross-correlation with ratings of being moved for a continuous measure of both along watching videos like the ones shown in the present study. Finally, the valence of the feeling may be complex for some people watching some videos. The larger picture is, however, that kama muta is predominantly a positive emotion, elicited by a positive appraisal. Our valence results fit several being-moved models that predict being moved to be a predominantly positive emotion (Cova & Deonna, 2014;Hanich et al., 2014;Kuehnast et al., 2014;Tokaji, 2003), yet are at odds with others that see it as predominantly negative (Neale, 1986).
---
Sensations
Across five different regions, languages, and cultures, we found the same three sensations to be predictive of kama muta. This supports our model of kama muta as a universal emotion with coordinated changes across several systems, resulting in an experience consisting of several components. We measured tears with a combination of moist eyes, crying, and tight throat; and chills as a combination of chills and goosebumps. Overall, tears were most strongly correlated with being moved. This, along with the fact that being moved was characterized as a predominantly positive feeling, suggests that kama muta weeping is different from sadness weeping. There is no consensus in the literature on crying, and several authors make an argument that negative components in the being-moved experience such as helplessness provoke the tears (Miceli & Castelfranchi, 2003; Vingerhoets & Bylsma, 2015). However, the present data do not support that argument.
A feeling of warmth in the chest was the second sensation. At this point, it is unclear what causes this sensation, possibly changes in cardiac activity, vagal tone (Keltner, 2009), or feedback from them. This feeling may be related to a gesture we often observe when people are strongly moved: placing one or both hands over the center of the chest (something that people are not always aware of doing). Chills and goosebumps were the third sensation related to kama muta. Although these skin sensations also occur in fear responses and when having uncanny experiences (and when exposed to low ambient temperature), their combination with tears, warm feelings in the chest, and positivity seems to be specific to kama muta (cf. Seibt et al., 2017).
---
Appraisals
The main appraisal we tested was one of increased closeness, an operationalization of our construct of a sudden intensification of communal sharing. As predicted, viewers' appraising characters as becoming closer significantly predicted increases in kama muta. In addition, increased closeness remained a significant predictor after controlling for appraisals of morality, becoming more human, inclusion, and overcoming obstacles.
When testing all appraisals, morality, increased closeness, becoming more human, and overcoming obstacles each predicted kama muta. How do people judge morality? Acting morally is doing the right thing, and what is the right thing depends on which relational model is applied (Rai & Fiske, 2011): Acts are seen as moral when they fulfill the ideals of the expected relational model and as immoral when the relational model is violated. We believe that the morality appraisal is best understood in this way: Somebody was seen as acting morally because she or he fulfilled the ideals which underlie communal sharing relationships such as compassion, responsiveness to needs, kindness, generosity, and inclusiveness. Communal sharing consists in needbased sharing and consubstantial assimilation: Where one is, people expect the other. However, many individual acts are primarily one or the other: Either the act consists in saving someone, helping and protecting them, or it consists in touching, hugging, kissing, approaching, and synchronizing one's movements to the other. So people observing acts of need-based giving may infer closeness but they are most likely to focus first and foremost on the need-based giving, which is best captured by the morality appraisal. However, morality is not a very sharply defined construct as a folk concept or as a scientific concept (Haste, 1993); so future studies will need to corroborate this interpretation by asking more specific questions.
Seeing someone as becoming more human implies that someone can be more or less human (Haslam, 2006). Whereas the dehumanization and infrahumanization constructs have generally been studied as perceptions of groups, here, we assessed humanness judgments about individual characters. Given that this judgment is rather remote from the actions depicted in the videos, it is unclear whether it leads up to the emotion or is a consequence of it. Even though we call them appraisals, we do not believe these judgments, as such, directly cause the emotion. Rather, we believe these judgments of humanization indicate the perception of an intensification of communal sharing that causes being moved. Perceptions of humanness may contribute to kama muta because they indicate that the characters are seen as relatable and sympathetic, or because they indicate that the characters are seen as sharing something essential in common with the participant (Haslam, 2006;Kteily, Bruneau, Waytz, & Cotterill, 2015;Leyens et al., 2000). Sharing a common essence, in turn, is the core of how we represent communal sharing relationships (Fiske, 2004a). Thus, the findings for the humanness appraisal can be explained by the kama muta model, but they are not a test of the model.
Our results lend cross-cultural empirical support to theoretical analyses seeing being moved as evoked by communal feelings or acts: solidarity, a communion of souls, a generous act, or reconciliation (Claparède, 1930); fulfillment of the phantasy of union (Neale, 1986); resolution of attachment concerns (Frijda, 1988); love/acceptance (Panksepp, 1995); reunification (Tan & Frijda, 1999); love, forgiveness, sacrifice, and generosity (Konečni, 2005); prosocial acts or reconciliatory moments (Hanich et al., 2014). Yet many of these models mention not one, but several alternative elicitors, not only the communal ones listed here but also others. The kama muta model traces all kama muta back to a common core: the sudden intensification of communal sharing.
Perhaps the most similar theory to ours is the elevation model, which assumes that an act of generosity, charity, gratitude, fidelity, or any strong display of virtue evokes elevation (Algoe & Haidt, 2009). The difference from the kama muta model is best illustrated with an example. As we know from another study (Schubert et al., 2016), the peak of the kama muta experience in the lion video (one of the four videos presented in all five countries) occurs when a lion that had been saved and raised by two young men, and then released in Africa, later recognizes them in the wild, runs toward them, and hugs them repeatedly. We think this act exemplifies communal sharing by showing closeness through a joyous reunion with hugging, laughing, and relief, rather than a virtuous act by the lion or by the men at that moment. People around the world understand this gesture, without words, and react to it emotionally, often with tears, a warm chest, or goosebumps.
In sum, the kama muta model seems to most parsimoniously explain the three appraisals that best predicted being moved across the five cultures. Our model is based in relational models theory, which integrates judgments of morality; acts of touching and other signs of closeness; social identity; humanness; and many other constructs into a common concept, communal sharing, the feeling of equivalence. This led to our theory that the many situations that people are likely to identify as moving, rørt, comovido/a, noge'a lalev, or gǎn dòng (感动) all have something in common, the sudden intensification of communal sharing. This social-relational transition universally elicits the same emotion, kama muta, involving the same physiological sensations and motives. Its cultural significance may vary considerably, but we did not investigate the meanings of kama muta in these five countries.
---
Limitations
Although the current study focused on intensification of communal sharing, the kama muta model predicts that it is sudden intensifications that evoke kama muta. We assessed this aspect with overcoming obstacles, but the model defines suddenness as abrupt increase in communal sharing, or salience of communal sharing against a prior or default background of loss, separation, or concern about togetherness. This background can be an obstacle, but it can also be contrary expectations, norms, apprehensions or a reality, against which the foreground of a communal sharing act, event, fulfillment, or fantasy is contrasted (see also Frijda, 1988). The theory that a suddenness/sharp contrast is essential still awaits empirical verification, either by developing good measures, or by manipulating it experimentally. Across all five languages, people sometimes use the terms we used to assess kama muta to denote other "nearby" emotions or feelings, such as sadness or awe. This is not an insurmountable methodological problem for us, because along with labeling, we look for convergent evidence from appraisals, sensations, and valence to classify an episode as an instance of kama muta. It would be a problem for our model or methods, however, if increased closeness was not perceived in most instances of being moved, rørt, comovido/a, noge'a lalev, or gǎn dòng (感动), because we assume that the vernacular labels for kama muta in these languages do approximately coincide with the kama muta construct.
The concepts of equivalence and bias have been put forward with regard to cross-cultural assessment and interpretation (van de Vijver & Tanzer, 2004). In our studies, we observed not only similarities but also considerable variation among the samples, both within and across cultures. This variation or bias may have many sources: the use of different video material, which was confounded with study sample (method bias); differences in the meanings of questions due to cultural and language variations (item bias); differences in sample characteristics like age and socioeconomic status (SES); and of course also differences in kama muta prototypes, precedents, paradigms, precepts, and proscriptions across languages and cultures (construct bias). Due to methodological restrictions, we cannot infer equivalence or measurement invariance from the present data, because we assessed most of our constructs with one or two items.
Our results show that intensifications of communal sharing are universally recognized and evoke a quite similar emotional response, a construct which we denote kama muta. This is a basis for cultural understanding: Even people lost in translation can recognize communal sharing when they see it, and in this way, figure out important relational building blocks in cultures other than their home cultures. Studies like the present one help to make this implicit relational cognition explicit, and can thereby help people navigate their increasingly multicultural societies and understand each other by recognizing something they all have in common, the kama muta emotion-whatever particular meanings they endow it with.
---
Authors' Note
Ravit Nussinson has previously published under the name Ravit Levy-Sadot. Thomas W. Schubert led the design of the studies, Beate Seibt wrote the first draft of the article, and Janis H. Zickfeld conducted the main analyses. All authors were involved in the translation, in data collection, and in revising the article. We thank the kamamutalab.org for helpful feedback and discussions.
---
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
---
Note
1. We provide the original questions for all languages in the supplemental material. Here, we use the English translations, knowing that the terms have different extensions, connotations, prototypes, and context-dependent meanings, reducing direct comparability across languages. | 41,358 | 1,745 |
db9b299d2520d2e1b93d6930f41b1fcb12329f16 | COMMODIFICATION OF HUMAN EMOTION AND MARRIAGE IN CAPITALISM (A READING OF PROBST’S THE MARRIAGE BARGAIN FROM A MARXIST PERSPECTIVE) | 2,023 | [
"JournalArticle"
] | Probst's The Marriage Bargain from a Marxist perspective to examine how social institutions like marriage and human emotions are commodified in the era of late capitalism. The novel deals with the theme of love, romance and marriage, but also highlights the commodification of marriage in a capitalist society. The two main characters, Alexandria and Nicholas, use marriage as a means to solve their economic problems, challenging the traditional notion of marriage as a bonding of two souls. This research aims to explore this phenomenon. This qualitative study employs a Marxist literary analysis of the novel, focusing on the commodification of marriage and human emotions in the late capitalist society. It examines how the novel's characters challenge the traditional concept of marriage and use it as a commodity to satisfy their material needs. The core finding of this research is that under the harsh economic pressures of late capitalism, human emotions and social institutions like marriage are commodified, and people compromise their ideals for economic gain. The novel shows how marriage is used as a commodity to solve economic problems, and how the traditional concept of marriage is being challenged by the utilitarian values of modern societies. The research concludes that The Marriage Bargain is an illustrative example of the commodification of marriage and human emotions in late capitalism. The exploration of this discourse clarifies how the institution of marriage is being used as a commodity to satisfy material needs. The novel raises an uncommon issue regarding the marital relationship, and the utilitarian attitude of its characters towards their own marriage represents the emerging social problem of the reification of human relationships. | Introduction
This study investigates the escalating commodification of social institutions, such as marriage and human emotions, within the contemporary capitalist society. The Marriage Bargain, a literary work by Jennifer Probst, serves as a poignant representation of this stark reality concerning the evolving characteristics of social institutions and human emotions in a late capitalist society. The research is specifically centered on a Marxist analysis of The Marriage Bargain, which not only addresses the overarching themes of love, romance, and marriage, but also emphasizes the commodification of marriage. Despite the apparent focus on love and romance, the novel provides ample room for a Marxist literary examination.
Commodification is the process of objectifying human emotions, and is often associated with reification. In Jennifer Probst's novel, The Marriage Bargain, the characters Alexandria Maria McKenzie and Nicholas Ryan utilize marriage as a means to solve their financial difficulties, thereby challenging the traditional notion of marriage as a bond between two individuals. By marrying for practical reasons, such as to save a family home or inherit a corporation, the characters view marriage as a commodity rather than a union of emotional and spiritual connection. Marriage, as a social institution, is typically viewed as a compromise between two individuals of opposite sexes, with love, care, and support being key components. However, conflicts and misunderstandings between partners can sometimes lead to divorce. Jennifer Probst's novel, The Marriage Bargain, sheds light on the challenges faced by traditional marriage in modern societies. The utilitarian values of the characters in the story challenge the conventional concept of marriage, exposing the emerging social problem of reification of human relationships. Late capitalism's economic power replaces the traditional human relations, kinships, and marriage with a definitive term. This process highlights the shifting notions of commodification, where abstract human emotions and norms are traded for objective monetary values. This trend results in the commodification of human emotions, reducing human relations to use value in which they are used and exchanged with materials.
In Jennifer Probst's novel, The Marriage Bargain, the protagonists Alexandria Maria McKenzie and Nicholas Ryan enter into a contract that stipulates the terms and conditions of their living arrangement as if they were married. This contractual marriage serves as a means for Nicholas to inherit his uncle's assets. This study aims to examine how the social institution of marriage is being commodified to fulfill one's materialistic desires.
Furthermore, the investigation seeks to shed light on the underlying reasons for challenging the conventional concept of marriage and treating human relationships as a commodity.
Jennifer Probst is a prominent lesbian novelist who offers a nuanced portrayal of the complex and transitional society of late twentieth-century America. Probst artfully interweaves her personal experiences with social issues, featuring characters from diverse socioeconomic backgrounds. Despite the broad range of topics explored in her works, Probst maintains a stylistic simplicity that blends idealism and realism, rendering her works accessible to a wide readership. Probst's literary contributions in exploring introspective themes are unparalleled, offering a positive and transformative impact on readers grappling with personal struggles and confusion. Her exceptional contributions have earned her a place among the distinguished authors of the modern era, with her unique perspective and style emerging as a singular voice in contemporary postmodernist literature.
---
Research Objectives
To analyze the commodification of marriage and human emotions in Jennifer Probst's The Marriage Bargain from a Marxist perspective.
To examine how the utilitarian values of the novel's characters challenge the traditional concept of marriage and use it as a commodity to satisfy their material needs.
To explore the underlying reasons for the reification of human relationships in late capitalism and how the commodification of marriage serves as an illustrative example of this phenomenon.
---
Review of Related Literature
Several scholars have scrutinized Jennifer Probst's literary works from various perspectives. Vailas (2004) has shed light on how Probst's writing has effectively disseminated liberal ideals in society. The liberal values that Probst espouses in The Marriage Bargain, however, exhibit certain elements of cynicism. Vailas remarks as follows:
On one level, The Marriage Bargain deals with the idea of a young generation of New York who have become enamored with deviant passions which leave them uprooted from the established ideals and norms of society. But, on a more intimate level, it examines the idea of happiness and whether or not man (or woman) is destined to ever find contentment. ( 12)
According to Vailas, The Marriage Bargain portrays certain principles that can guide an individual's life in disarray. However, Vailas' perspective towards marriage appears to be one of cynicism, as her characters are portrayed as either unfaithful or marrying for financial benefits. This pessimistic outlook towards life leads to abstract philosophical musings. Smithson (1979) provides a succinct assessment of Jennifer Probst's notable literary works, delineating a gradual shift in tonality and thematic content. Smithson's perspective is presented below:
Probst's novels bear the mark of imaginary or uncertain crimes. This can be called an innovation in the field of commercial literature. In The Marriage Bargain, the billionaire Nicholas Ryan is interested in nominal marriage. He is hopeful that this contract marriage is likely to land him in favorable condition. But the result turns out to be unexpected. At last Nicholas is filled with worry and anxiety. Nick's case of contract marriage is quite different. (17) Smithson asserts that Jennifer Probst's representative works contain innovative elements. Although the subject matter of The Marriage Bargain may be unpalatable and shocking to many readers, it is still relatively new. Probst masterfully dramatizes the societal pressure to conceal one's inner desires and personality in her novels. Tammy (2001) approaches The Marriage Bargain by examining the sequence of events within the novel. She believes that the plot's development is the most captivating aspect of the novel. With this in mind, Tammy comments as follows: The eventual denouement of the narrative is relatively disappointing. Alex is forced to choose between the naturalness of human emotions and pressures of shifting economic condition. Deciding she has already lost the battle, Alex gives up trying to run his own bookstore and accepts the offer of Nicholas. The moral dimension of the contract between Alex and Nicholas is noticeably striking. It deserves attention and analysis. (17) In her analysis, Tammy challenges the ordering principle of events in The Marriage Bargain, which she finds to be complicated and frequently decisive, leaving the reader unsure.
Despite this, Tammy notes that the novel opens up new possibilities even in moments of disappointment and frustration. The characters Alex and Nicholas remain calm and composed in the face of unfavorable situations. Sander (2001) evaluates Jennifer Probst's novel The Marriage Bargain based on her ability to create new terms and neologisms to convey her original ideas. Sander believes that Probst's work promotes the emergence of a new concept of individual freedom and a demand for more space for creative expression. Probst introduces a new type of interpersonal relationship, which she describes using a newly coined term and neologism. This relationship requires understanding and familiarity between married partners to create a greater level of creative expression. Such a relationship can occur between a married woman and her unconventional lover turned fiancé. Probst also uses phrasal expressions and poetic neologisms throughout the novel to convey different implications. These expressions contain the ethos of Victorian protocol, indicating the influence of Victorian mentality on the language used by sober-minded individuals of that era. Bernard (2005) examines the theme of double consciousness in Jennifer Probst's The Marriage Bargain, particularly in relation to women who are aware of their growing passion.
He suggests that Probst's text does not lend itself well to feminist analysis due to the author's excessive sobriety. Bernard notes that the character of Alex is intelligent and educated, but also immature and irrational in her decision to marry Nicholas due to financial pressures. He sees a similarity between the author's life and the protagonist's life, and suggests that extreme feminist consciousness harms the character's conscience. The novel is an example of popular fiction with naturalistic fervor and seems to showcase Probst's intellectual prowess through the character of Alexa. Wade (1999) regards The Marriage Bargain as a work that possesses both subtle and straightforward characteristics. The primary issue that Jennifer Probst addresses in the novel The Marriage Bargain is something of a subtle text in which Alexa is caught between the deterministic forces and conscience. Among Americans of the twenty-first century, and especially among the young, morality and sex are interchangeable terms. Frequently the judgment of right and wrong behavior rests almost exclusively on sexual behavior. Evil is identified with sex: there the devil wields his greatest powers. The relaxed social and sexual rituals of his time occupy the forefront of the novel. (27)
Wade argues that Alexa struggles to uphold her moral principles when faced with practical challenges. The novel focuses on the conflict between commercial and ethical values. Probst's uncertainty is evident when she attends a party and is torn between asserting her independence and seeking her father's approval. By highlighting the negative impact of limited financial opportunities, Probst implies the influence of determinism. Macey (1992) sees both optimism and pessimism in The Marriage Bargain. The novel portrays the pessimistic condition arising from the growing poverty of the working class.
However, Macey believes that there is also a ray of hope in this pessimistic world. He argues that the novel shows that even in the midst of corruption and sickness, there are yearnings and inarticulate strivings for a better world and a life with more dignity. Macey praises the novel's portrayal of financial liberation and pragmatic choices, but he criticizes the lack of reflection on the decency and dignity of human ambition. Knopf (2003) praises Jennifer Probst's personal style in The Marriage Bargain and her ability to convey emotion without being overly sentimental. He notes that the novel combines introspection with stream-of-consciousness techniques and that the narrator's sarcasm and peculiarities contribute to its success. Knopf emphasizes the author's wit and flair, and describes the novel as a matchless piece of art.
---
Previous studies on
---
Research Methodology
The research methodology employed in this study utilizes the theory of Marxism to examine the issue of proletarians as others. Marxism, which has had a significant influence among workers and intellectuals in capitalist countries, has been utilized by non-Marxist intellectuals, particularly sociologists and historians in Western countries. Many liberation groups in Third World nations now clearly understand the character of their opponent thanks to Marxism, which has been adjusted to deal with the particular combination of primitive and sophisticated capitalist circumstances.
The researcher adopts Marx's dialectical approach, which views actual changes in history as the outcome of opposing tendencies or contradictions. Marx's materialism is also used to analyze the interaction between social conditions and behavior, and people's ideas.
Marx's theory of alienation in the labor system is based on four relations, which are investigated. These relations include the worker's alienation from productive activity, the product of that activity, other human beings, and the distinctive potential for creativity and community.
Marxist critics' application of the perspective of Marxism in interpreting literary texts is also examined. The study emphasizes the significance of literature in supporting capitalist ideology since it is consumed mostly by the middle classes. Writers who sympathize with the working classes and their struggle are regarded favorably by Marxist critics. On the other hand, writers who support the ideology of the dominant classes are condemned. The research draws upon the insights of various Marxist theorists, whose interpretations may differ in breadth and sympathy.
---
Analysis of Probst's The Marriage Bargain
This study employs a critical analysis to investigate the commodification of human emotions in the novel. Nicholas Ryan eagerly anticipates being included in his uncle's will as a beneficiary of his vast properties and wealth. The text describes the story of Nicholas Ryan and his uncle Earl, who is the head of a corporate house with significant wealth and properties. Nicholas hopes to become his uncle's legitimate heir and inherit his estate, but his uncle puts a condition on his will that requires Nicholas to get married and live with his wife for at least one year before he can inherit everything. Unfortunately, Nicholas has a history of frequently changing romantic relationships, and his uncle is doubtful that he can have a stable family life. As a result, Nicholas seeks a marriage that is purely transactional, and he looks for a woman who will act as his wife for just one year, with the sole purpose of fulfilling the conditions of his uncle's will.
The present study employs the theoretical perspective of Marxism as its primary theoretical framework. The research methodology is based on this approach. The theory of Marxism, as developed by prominent theorists such as Marx, Lukacs, and Adorno, is referenced throughout the analysis. The researcher asserts that Marxism is an appropriate lens for this study, given that The Marriage Bargain examines themes of economic oppression, dispossession, and other forms of social injustice. As such, the theoretical perspective of Marxism is particularly salient to the analysis of the novel.
Marxism is the theory Karl Marx created to explain how society functions as well as the development of human history. Marx (2001) held that all other facets of society are primarily determined by the state of the economy and the structure of the productive system.
Marx's theory describes the characteristics of capitalism, which he believed to be profoundly unsatisfactory and wished to eradicate through a bloody uprising in order to build a communist society. Marx (2001) disagreed with reformers who claimed that a simple shift in ideas could transform society because he argued that prevailing ideas are the outcome of material or economic realities.
Nicholas maintains a platonic relationship with his contract wife by refraining from any sexual interaction with her. He seeks a wife who will cohabit with him solely for the purpose of fulfilling a contractual obligation, without any emotional or physical expectations.
The contract stipulates that the wife will receive a lump sum payment at the end of the year for fulfilling her role as a contractual wife. The following excerpt details the peculiarities of this proposed contractual agreement between Nicholas and his prospective spouse:
A woman who does not love me.
A woman who does not have any animals.
A woman who does not want any children.
A woman who has an independent career.
A woman who will view the relationship as a business venture.
A woman who is not overly emotional or impulsive.
A woman whom I can trust. (18)
The extract describes Nicholas's view of marriage as a tool to secure his inheritance rather than a traditional emotional bond. He seeks a woman who will live with him as his wife without the expectation of sexual or emotional intimacy. Nicholas sees marriage as a commodity that can be traded and converted into monetary value, which sets him apart from traditional views of marriage as involving emotion, attachment, responsibility, trust, and cooperation.
According to Marx, the interaction between the forces of production and the relations of production largely determines the kind of society and the course of social evolution. While the latter relates to the social structure of production and who owns or controls the productive resources, the former refers to the technology employed for production. In a capitalist society, the owners of the producing resources are also those who pay the employees. Marx saw that the new social relations of production under capitalism eventually hindered the full development of the new forces of production, leading to contradictions and revolutionary change. David Riazanov, a supporter of Marxism, also emphasized this contradiction in contemporary capitalist society.
Nicholas is in search of a contractual wife who does not require emotional or affectionate attention and is content with a purely transactional relationship. His main objective is to secure his inheritance of his uncle's properties, and he sees marriage as a tool to achieve this end. Despite having fond memories of Gabriella, a sharp conversationalist with whom he enjoyed spending time, he dismisses the idea of marrying her because he fears she is already falling in love with him. According to Nicholas, the supermodel he is currently dating is ideal for social functions and sex, but not for marriage. He is afraid of emotional attachments and seeks a loveless marriage that would last only a year. Nicholas concludes his quest for a wife when he meets Alexa, who is in dire need of money to save her bookstore from imminent bankruptcy. With the help of Maggie, Nicholas eventually meets Alexa and presents his proposal that she become his wife for a year, which she accepts. Nicholas explains to Alexa that their marriage is essential for him to secure his inheritance of his uncle's properties.
Georg Lukacs' theory of totality is central both to his own thinking and to the subsequent development of Western Marxism. He had a desire for totality even in his early works, and it became the center of his book "History and Class Consciousness," where it is seen as the core of both Hegel's and Marx's methodologies. Lukacs (2001) warns against being too orthodox in interpreting Marxism and emphasizes the importance of the concept of totality.
The theory of totality is crucial to later Western Marxists' interpretation of the metaphysical tradition and Marx's philosophy and their critique of the modern world. Therefore, understanding Lukacs' theory of totality is helpful in finding the right way to the entire tradition of Western Marxism.
After extensive dialogue and deliberation, Alexa has ultimately decided to accept Nicholas's proposal. The two parties intend to derive mutual benefit from their union, which has been forged out of economic necessity through the execution of a contract to cohabit as spouses. Upon the conclusion of their one-year marriage, Alexa will receive a specified amount while Nicholas will inherit his deceased uncle's assets. Subsequent to this, their marriage shall be deemed null and void. The following excerpt portrays how their marriage is arrived at primarily out of financial considerations:

"I am marrying you for business reasons, Alexa. Not your family." Her chin tilted up. He made a mental note of the gesture. Seemed like a warning before she charged into battle. "Believe me, I am not happy about this, either, but we have to play the part if people are going to think this is real." His features tightened but he managed a nod. Fine. His voice dripped with sarcasm. "Anything else?" She looked a bit nervous as she shot him a glance, then rose from the chair and began pacing the room. (30)

Alexa accepts Nicholas's proposal due to economic pressure, but is uncomfortable with the idea of being his wife and sharing a home. She feels nervous about the situation. Prior to accepting Nicholas's proposition, Alexa retires to her residence to ponder the proposal. She finds herself in a dilemma, as circumstances compel her to choose an option that is not congruent with her internal preferences. The following excerpt sheds light on Alexa's predicament:
The man before her struck out on everything she believed in. This was no love match. No, this was business, pure and simple, and so very cold. While her memory of their first kiss rose from the recesses of her mind, she bet he had forgotten the moment completely. Humiliation wriggled through her. No more, she had her money and could save her family home. But what the hell had happened to her list? (32)

When materialistic pressures become overwhelming, concerns regarding morality and social decency are often overlooked. However, women like Alexa are hesitant to enter into loveless and commercialized relationships. It is typical for someone in Alexa's position to consider the potential negative consequences of agreeing to live with Nicholas and to contemplate how such a decision would affect her social status and reputation.
During their discussion about their proposed marriage arrangement, Nicholas and Alexa have a conversation about the topic of sex. Nicholas suggests that they should be discreet about their sexual activities, which causes Alexa to feel shocked and uncomfortable.
To alleviate her concerns, Nicholas explains that he deals with high-end clients and has a reputation to protect, so they must be extremely discreet. Despite feeling odd about Nicholas's proposition, Alexa tries to maintain her composure and not show any change in expression.
Alexa finds herself in a predicament where she cannot disclose to her parents her decision to enter into a contractual marital agreement with Nicholas, made with the intention of securing funds to rescue her bookstore and support her family. She is constrained to fabricate a falsehood due to her inability to reveal the truth. In her soliloquy one evening, Alexa ponders the hypothetical response of her family towards her decision:

His offer suggested a real relationship between them, and it made her long for more. She should have introduced her family to a real-life love-not a fake. The lies of the night pressed down on her spirits as she realized she had made a bargain with the devil for cold hard cash. Cash to save her family. But cash nonetheless. (59)

Alexa and Nicholas enter into a contractual marriage to overcome financial difficulties, but they have to lie to their parents about it. Alexa feels guilty about keeping this secret from her parents but is compelled by their dire financial situation. However, they both treat marriage as merely a means to an end, disregarding its societal significance.
Marx criticized capitalism for alienating workers from their labor and turning them into robotic objects that prioritize profit over human need. He argued that the only way to overcome this alienation and create a democratic, planned society is through a class struggle between the bourgeoisie and the proletariat. However, Luxemburg (2001) questioned the feasibility of abolishing advanced market-based societies and replacing them with a fully planned and controlled society, and criticized the shaky assumption underlying Marx's notion of alienation. Marx's scientific socialism was based on his theory of value and concept of alienation, which exposed the contradictions of capitalism and the necessity of the class struggle.
During their conversation regarding her dire financial situation, Nicholas asks Alexa about the extent of her desire for the money and notes that she does not seem enthusiastic about marrying him and participating in a sham wedding while lying to her family. He questions whether this is all solely for the purpose of business expansion. This inquiry from Nicholas leaves Alexa perplexed, and she considers disclosing the truth to him, that:
The lack of medical insurance to pay the staggering bills. Her brother's struggle to get through medical school while supporting a new family. The endless calls from collectors until her mother had no choice but to sell the house, already heavily mortgaged. And the weight of responsibility and helplessness Alexa carried along the way. "I need the money", she said simply. "Need? Or want?" She closed her eyes at the taunt. (60)

Alexa's financial desperation leads her to accept the offer of a man who wants to use her for a year in exchange for a sum of money. Nicholas, on the other hand, is motivated not by need but by greed, the desire for money, which leads him to enter into a fake marriage contract.
After a few weeks of living together as a contracted couple, the initial terms and conditions agreed upon by Nicholas and Alexa fade away. Alexa begins to feel the negative effects of their commercialized marriage, while Nicholas tries to follow the predetermined rules. Alexa realizes that telling Nicholas the truth would be self-destructive and instead decides to protect herself from his condescending behavior by cultivating his hatred towards her. She believes that this will allow her to maintain her pride and family's reputation while avoiding his unwanted advances. This shows that when a genuine relationship is based on commercialization and commodification, it can lead to harmful outcomes. Despite Nicholas' failure to uphold their agreement, he crosses a line with Alexa that poisons their relationship, and she takes deliberate steps to safeguard herself from his mechanical passions and sterile affection.

Jameson (2005) critiques structuralism in literary criticism for its failure to consider historical context. He advocates for a dialectical criticism that takes into account both synchronic and diachronic aspects of texts. Critics accuse Jameson of trying to create a totalizing theory of interpretation, but Jameson denies making transcendent claims and asserts that his theory is openly ideological and superior to other theories in terms of comprehensiveness.
In order to safeguard her family's reputation, Alexa determines that the most effective strategy is to provoke hatred in Nicholas. She adamantly rejects any notion of accepting pity from him. Despite the transactional nature of their marital arrangement, their innate bodily desires cause them to forget the conditions they had established. They consciously maintain a boundary between them even though they are legally married. However, their repressed sexual impulses periodically manifest themselves, ultimately overpowering and overwhelming them. The following passage illustrates how their suppressed sexual impulses and instincts weaken their resolve and motivate them to transgress the boundary they had established:
Primitive sexual energy swirled between them like a tornado gaining speed and power. His eyes burned with a sheen of fire, half need, half anger as he stared down at her. She realized he lay between her open thighs, his hips angled over hers, his chest propped up as he gripped at her fingers. This was no longer the teasing indulgence of a brother. This was no old friend or business partner. This was the simple want of a man to a woman, and Alexa felt herself dragged down into the storm with her body's own cry. (84)
Nick and Alexa fail to uphold the terms of their contractual marriage as they succumb to their sexual desires. Despite Alexa's attempt to make Nick hate her, their attraction for each other is too strong. Their agreement does not constrain the power of human emotions and impulses, which are not rule-bound.
Nick and Alexa experience natural human desires such as the need for intimacy, care, and sexual satisfaction, yet they impose strict rules on their relationship. The restrictions they place on their marriage lead to the manifestation of intense and unfulfilled desires. Nick struggles to reconcile his bodily impulses with the contractual obligations he has made with Alexa, causing inner turmoil. The following passage illustrates this conflict:
Her voice was raspy. Hesitant. Her nipples pushed against the soft fleece with demand. His gaze raked over her face, her breasts, her exposed stomach. The tension pulled taut between them. He lowered his head. The rush of his breath caressed her lips as he spoke right against her mouth. "This means nothing." His body contradicted his words as he claimed her mouth in a fierce kiss. (85)

Nick and Alexa's marriage is based on a commercial agreement that does not allow them to seek love or mutual affection. However, they are both overpowered by their passions, and their attempts to control them result in violent and deviant behavior. Despite trying to maintain a distance from each other, they are drawn to one another, and their hunger for sexual satisfaction knows no bounds. As a result, their marriage does not follow the path they expected it to take.
In contrast to other theories, Marx asserts that the reasons for a product being considered a commodity can be traced back to human needs, desires, and practices. In other words, the "use value" of a commodity is determined by its ability to satisfy human wants, while its "exchange value" is dependent on the desire of people to exchange it for something else. Additionally, a commodity's exchange value can only be quantified if it possesses a value derived from the exertion of human labor power, and that value is calculated based on the average labor time necessary to produce similar commodities.
In their marriage, intimacy is a threat to both Nick and Alexa, but they are compelled to stay together due to practical reasons. Their happiness is tinged with fear, and the tension between them is highlighted in a scene where Nick corners Alexa in the kitchen. Despite the threat of a more intimate touch, Nick wants to fulfill his sexual desire for Alexa and keep her as a long-term partner. This is a departure from their initial contractual marriage arrangement, and Nick is surprised by the turn of events. Alexa also seeks to cheat on him, and their marriage is marked by a reversal of normal things.
During their marriage, Alexa announces that she is pregnant, which shocks Nick.
Despite his clear reluctance to have a child, Alexa remains hopeful that Nick's feelings will change with time. This is highlighted in the provided excerpt, where she tries to convince him that he may feel differently in the future. However, Nick is reminded of Gabriella's words, which haunt him. Their marriage began as a business transaction, but the unforeseen consequences of such a union have become a reality. It remains to be seen if Nick will accept the baby that Alexa will give birth to.
In light of the foregoing, it may be deduced that the ramifications of the transformation of human emotions and revered social institutions, such as marriage, into commodities inflict immense suffering upon both Alexa and Nick. The compelling force of human desires renders economic and non-economic incentives irrelevant. The practical truth cannot be disregarded in favor of immediate financial gain.
---
Conclusion
In conclusion, this research sheds light on the commodification of human emotions in the era of late capitalism. The Marriage Bargain by Jennifer Probst illustrates how sacred institutions like marriage are treated as commodities that can be traded and transacted with money. The core finding of this research is that under harsh economic pressures, human feelings and emotions are no longer the pure bonding between two individuals. Alexa, a woman from a respectable family, enters into a contractual marriage with Nicholas for money. While Nicholas is seeking a wife to inherit his uncle's properties, Alexa is compelled to collect money by hook or crook due to her business's financial difficulties. As their marital life proceeds, both Alexa and Nicholas enter into a sexual relationship, and Alexa gets pregnant. The attempt to commodify human emotions incurs hazards and discomforts, ultimately ruining the beauty of human relationships. | 32,492 | 1,772 |
41cccf43c94a023f2a54380d93726aa1e01a6b59 | COMPARATIVE STUDY OF MENTAL WELL-BEING IN TEENAGERS WITH WORKING MOTHERS IN THE PRIVATE SECTOR AND HOMEMAKERS ATTENDING PUBLIC AND PRIVATE SCHOOLS IN LAHORE, PAKISTAN | 2,023 | [
"JournalArticle"
] | Mental health plays a vital role in our ability to think, feel, interact, work, and enjoy life individually and collectively. A person's mental health is affected by several things at any moment, some of which are social, psychological, and biological. Children of working mothers may have different degrees of anxiety, depression, and social problems. Adolescents' mental health has been the subject of countless global studies. Still, less is known about the differences in adolescent mental health between children whose mothers work and those whose mothers do not work outside the home. This research aimed to compare students' mental health in public and private schools in Lahore based on their mothers' employment and its correlation with other sociodemographic characteristics. The research was cross-sectional and included 150 randomly chosen people from many different strata. The collected data were entered and analyzed using SPSS version 26.0. The majority of students we checked attended private schools. The study findings revealed no significant association between the mental health status of adolescents and their mothers' working status, especially in the private sector. However, a noteworthy correlation was observed between mental health status and gender. The average score for mental health assessment was not satisfactory. In conclusion, this research found no statistically significant difference in adolescents' mental health across groups depending on their mothers' employment level. The results indicated that a mother's employment or lack thereof had little impact on her children's psychological health. However, when comparing the mental health of male and female students, there was a clear gender gap. Adolescents' mental health was not significantly affected by factors like their mothers' education, the sort of household they were born into, their birth order, or their parents' monthly income. To learn more about this issue, researchers should investigate how teenagers see their parents' parenting styles in the future. | Introduction
According to the World Health Organisation, mental health is a condition of comprehensive physical, mental, and social well-being rather than just the absence of sickness or disability. Mental health is influenced by many biological, psychological, social, cultural, and environmental elements that interact in intricate ways. These characteristics are often recognized as risk and protective factors that impact the mental well-being of people and groups (Mrazek and Haggerty, 1994).

Adolescence is often considered a critical period in an individual's life since it significantly influences their future development and outcomes. This is a critical phase characterized by establishing and sustaining social and emotional behaviours that are vital to one's psychological well-being. These include the adoption of good sleep patterns, engagement in regular physical activity, cultivation of coping mechanisms, problem-solving abilities, and interpersonal skills, as well as the acquisition of emotional management techniques. Supportive settings within the family, educational institutions, and the broader community are equally crucial. According to Kessler et al. (2007), there is a worldwide prevalence of mental health issues among teenagers, with an estimated range of 10-20%. However, these illnesses often go undiagnosed and receive inadequate treatment.

In July 2020, it was determined that around 17.6% of individuals between the ages of 11 and 16 had symptoms indicative of a potential mental condition. This prevalence increased to 20.0% for young adults within the age range of 17 to 22. When examining variations in mental health based on gender, it was found that females were more likely than males to present with a suspected mental condition (England and Improvement, 2020). Approximately 50% of mental health illnesses in the adult population seem to manifest during adolescence, namely by age 14. However, many of these cases go unnoticed and receive no treatment. Around one-sixth of the global population consists of adolescents, which amounts to around 1.2 billion individuals between the ages of 10 and 19. Depression ranks as the primary contributor to morbidity and impairment in the teenage population, while suicide stands as the third leading cause of mortality. The World Health Organisation (Sunitha and Gururaj, 2014) has identified that exposure to violence, poverty, humiliation, and feelings of devaluation might heighten the susceptibility to experiencing mental health issues. According to the findings of the 2017 Mental Health of Children and Young People (MHCYP) survey conducted in England, 15.3% of individuals aged 11-19 exhibited symptoms indicative of at least one mental health condition. Additionally, 6.3% of this demographic matched the diagnostic criteria for two or more mental illnesses. In 2017, the prevalence rates across the age groups of 10-12 and 12-14 exhibited little change.
However, notable disparities in prevalence rates emerged when the factors of both sex and age were considered. The prevalence of mental problems was higher among girls aged 17-19 (23.9%) than males (10.3%). The data from 2020 substantiates the observed disparity, indicating that likely mental problems are more prevalent among older teenage girls (27.2% among females aged 17-22) compared to boys (13.3%) (Mandal and Mehera, 2017).

The presence of depression in children is a significant health concern that has a profound impact on their overall development. Major depressive disorder is characterized by a chronic feeling of a dysphoric mood and a diminished interest or pleasure in almost all activities. These emotions are accompanied by various supplementary symptoms that impact food and sleep, activity and focus level, and self-value perceptions. Parents have a significant role in the formation and development of subsequent generations. During adolescence, peers have a crucial role in facilitating the assimilation of values and the acceptance of cultural norms. Additionally, they contribute significantly to promoting healthy emotional and psychological growth in children, ultimately fostering their development into successful individuals.

The significance of a mother's role stems not from her unique talents but rather from the substantial amount of time she spends with her children, which allows her guidance to profoundly impact their attitudes, abilities, and behavior. The extent of a mother's dedication to childcare is often assumed to be significantly impacted by her level of economic activity. Temporal limitations result in a reduced availability of childcare for employed women compared to their nonemployed counterparts. The mother assumes the responsibility of making daily choices, guiding her children as they grow, and equipping them with the necessary attributes of bravery and comprehension to confront life's challenges. Ensuring her children's nourishment and proper care is within her jurisdiction. She must provide training that enables individuals to progress according to societal norms and expectations. The individual in question has been endowed by a higher power with the inherent skill and aptitude to provide vitality and inspiration to subsequent cohorts. The advancement seen in industrialized nations may be largely attributable to the significant contributions made by women in such societies (Shah, 2015).

Most children who succeed and exhibit a sense of security tend to originate from households characterized by positive parental attitudes and a nurturing parent-child interaction. Mothers provide their children with love, affection, and care from the moment of their birth. The provision of childcare services has emerged as a significant concern in several nations around the globe. It is well acknowledged that a mother figure's affection and care are essential for children's well-being and development. According to popular belief, the family serves as the first educational institution, with the mother assuming the role of the primary educator for each child. During ancient times, particularly under conventional family structures, women were primarily responsible for childcare and domestic duties. The individuals were prohibited from leaving their residences for employment purposes. The responsibility for generating income via breadwinning was exclusively shouldered by male members within the family unit.
Mothers dedicate significant effort to fostering good personality traits, uncovering latent abilities, and facilitating effective coping mechanisms in challenging circumstances (Shrestha and Shrestha, 2020). Children can form a solid relationship with their biological mother and other members of their immediate family. A growing phenomenon of women joining the labor market is driven by economic constraints or a desire to establish their sense of self. This phenomenon has resulted in a significant transformation of the conventional role of mothers from being primarily responsible for caregiving to assuming the position of primary income earners. Consequently, this shift has also changed the objectives and methods of child upbringing (Rohman, 2013).

Based on the Lahore Education Statistics of 2007-08, the total count of female instructors in Lahore was 679,503. In 2015, 773,332 were recorded, signifying a notable rise in the population of female teachers. This increase may be attributed to several causes, the predominant being the societal perception that teaching is a suitable vocation for women. Female educators can allocate much time to their families while fulfilling their professional responsibilities. Another significant element is that the educational policies implemented in Lahore over the years have prioritized the enrolment of women in the teaching profession. This has been achieved by providing supplementary incentives targeting women (Shrestha and Shrestha, 2020).

Balancing the obligations of work with the duties of familial life is a well-known struggle encountered by parents raising children in the contemporary day. The market has shown a response to the increasing presence of working women with small children, prompting the ongoing development of work-life programs aimed at catering to the diverse demands of all workers. However, there is still little understanding of the unique work-life experiences of working parents who have children with special needs (Syed and Khan, 2017). Approximately 20% of families consist of children who have particular health or mental health requirements.
---
Methodology
A cross-sectional study design was used to quantitatively examine the mental health of adolescents with working and non-working mothers who are enrolled in public and private schools. The study was conducted at private schools in Lahore, and the schools were selected through stratified random sampling. The research sample consisted of teenagers enrolled in private schools in Lahore whose mothers were either employed or not employed; the sample was chosen based on defined inclusion and exclusion criteria.

The data were obtained via a self-administered questionnaire provided to the participants. A proforma was devised to gather data about the socio-demographic characteristics of the participants as well as to conduct a mental health evaluation. The questionnaire was adapted from two validated tools, and its primary objective was to evaluate the mental well-being of teenagers with employed and non-employed mothers. The dependent variable in this study was the mental health of teenagers, assessed using the adapted measurement instrument. Data on the independent variables were collected through a self-administered questionnaire constructed after a review of the international and national literature. The proforma included socio-demographic variables such as gender, age, institute, and mother's working status. It also included variables related to the mental health assessment, such as the mother's education, participation in extra-curricular activities, type of family, number of siblings, and school environment.

Before starting the formal data collection procedure, pilot testing was performed with 10% of the sample size. The proforma was tested for possible changes; no major changes were made after pilot testing, apart from one question added to the demographic section on the number of siblings. Data from pilot testing were not included in the final analysis.

The data were obtained via self-administered questionnaires without the involvement of paid data collectors. The study recruited adolescents from households with working and non-working mothers. Oral consent was obtained from all participants, and only those who agreed to participate in the study were included. After obtaining consent, the participants were given the self-administered questionnaire, and the researcher recorded their responses. Data collection was completed in approximately two months. All completed questionnaires were kept in protected plastic files, and no one had access to them other than the researcher.

A codebook was established, and the data were entered into the Statistical Package for the Social Sciences (SPSS) version 26. Following data entry, the data underwent a thorough error-checking process before further analysis, and after data cleaning certain variables were transformed. The data analysis was conducted in two stages, namely descriptive analysis and inferential analysis. Descriptive statistics were obtained for the socio-demographic factors: categorical variables were summarised as frequencies and percentages and presented in tabular format, while continuous variables were summarised by the mean and standard deviation, assuming a normal data distribution.
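The analysis itself was carried out in SPSS. Purely as an illustration of the same descriptive and inferential steps, the sketch below shows how comparable survey data could be summarised in Python with pandas and SciPy; the file name, column names, and the cut-off used to categorise the mental health score are hypothetical and are not taken from the study.

```python
# Illustrative sketch only: the study used SPSS 26. This pandas/SciPy
# equivalent assumes a hypothetical CSV with invented column names
# ("gender", "mother_working", "mha_score") and an invented cut-off for
# categorising the MHA score; none of this comes from the actual dataset.
import pandas as pd
from scipy import stats

df = pd.read_csv("mha_survey.csv")  # hypothetical data file

# Descriptive statistics: frequencies and percentages for categorical variables
for col in ["gender", "mother_working"]:
    counts = df[col].value_counts()
    print(counts)
    print((counts / len(df) * 100).round(1))

# Mean and standard deviation of the mental health assessment (MHA) score
print(df["mha_score"].mean(), df["mha_score"].std())

# Inferential step 1: independent-samples t-test of MHA score by mother's working status
working = df.loc[df["mother_working"] == "yes", "mha_score"]
non_working = df.loc[df["mother_working"] == "no", "mha_score"]
t_stat, p_val = stats.ttest_ind(working, non_working, equal_var=False)
print(f"working vs non-working mothers: t = {t_stat:.2f}, p = {p_val:.3f}")

# Inferential step 2: chi-square test of association between gender and MHA category
mha_category = pd.cut(df["mha_score"], bins=[0, 50, 100], labels=["low", "high"])
crosstab = pd.crosstab(df["gender"], mha_category)
chi2, p_val, dof, _ = stats.chi2_contingency(crosstab)
print(f"gender vs MHA category: chi2 = {chi2:.2f}, df = {dof}, p = {p_val:.3f}")
```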
---
Results
A total of 150 responses were included, collected through a self-administered questionnaire. Out of the 150 respondents, (67) were boys and (33) were girls, and the most common age was 15 years (13.5%). All of the students were from private schools. Of the total number of respondents, 36.0% had working mothers and 64.0% had non-working mothers. An adapted questionnaire was used to assess adolescents' mental health (MHA), and the outcome variable was the mental health assessment of adolescents. Although females were targeted slightly more than males, there was no significant difference, as shown in Figure 1. Therefore, according to the results, there is no major difference between the categories of males and females targeted, and further results show that the working and non-working categories of participants also do not differ greatly.
---
Discussion
The current research aimed to evaluate the mental health of teenagers with working and non-working mothers. Adolescents' mental health was assessed using tools adapted from previous studies. The study was conducted at private schools in Lahore city. Stratified random sampling was used, and schools were selected through a lottery method.
Pilot testing was performed before the formal data collection, using 10% of the sample size of 150. Reliability was checked after entering the data into SPSS.
The current investigation demonstrated a statistically significant correlation between teenagers' mental wellbeing and gender. No statistically significant correlation was observed between the mental health of teenagers and demographic variables such as age, educational institution, monthly household income, and others (Singh et al., 2020).
The present investigation revealed a marginal disparity in the mental health status of adolescents with working and non-working mothers; however, this discrepancy did not reach statistical significance. In contrast, preceding research in India revealed a notable disparity in the mental well-being of adolescents with working mothers compared to those with non-working mothers, although the obtained p-value indicated no statistically significant difference in the mean score of psychosocial disorders, as reported by Koirala in 2016. One potential reason for the present findings might be that employed mothers dedicate less time to their children, which may hinder the development of emotional bonds between them (Berghuis et al., 2014). Working women face the challenge of balancing their responsibilities in both the household and professional spheres, resulting in heightened stress and anxiety levels within their home life. In such circumstances, they may remain unaware of changes in their children's moods and behaviors. As a result of these factors, the mental health status of adolescents with working mothers has been reported to be worse than that of adolescents with non-working mothers.

The current investigation demonstrated a statistically significant association between teenagers' mental well-being and gender: there was a statistically significant difference in the mean mental health scores between male and female pupils (Mahmood and Iqbal, 2015). Preceding research done in Islamabad (Lahore) revealed a notable disparity in the psychological adaptation of pupils.
The findings indicated a statistically significant difference between males and females. According to Dr. Khalid Mahmood (2015), there is evidence suggesting that females exhibit greater psychological adjustment than males. Further research done in Lahore similarly indicated that there was no discernible correlation between teenagers' psychological well-being and their mothers' employment status (Mahmood and Iqbal, 2015). That study revealed no statistically significant disparity between male and female offspring of employed mothers, and likewise no correlation among the offspring of mothers who were not employed.

The findings of the current research indicate that there is no statistically significant association between the mental health of teenagers and the educational level of their mothers, although a positive correlation was observed between the educational attainment of mothers and the mental health evaluation scores of their children. One potential explanation is the limited opportunity for children of employed mothers to interact with their peers and community members. Employed mothers have less time to engage with their children, and this limited availability of time has been shown to have a detrimental impact on children's mental well-being, manifesting in challenges related to communication, attention, emotional comprehension, and the fulfillment of their needs. The limited scope of their social environment may be attributed to the insufficient amount of time that mothers can dedicate to their children's leisure and socialization (Van Droogenbroeck et al., 2018).

The present research also revealed a marginal disparity between the mental health evaluation and the monthly income level; however, this discrepancy did not reach statistical significance. Preceding research done in Germany revealed a notable disparity between the mental well-being of teenagers and the monthly family income: Reiss et al. (2019) found a substantial negative correlation between higher levels of family income and the prevalence of mental health disorders. One potential explanation for the difference is the smaller sample size of the present investigation compared to the prior study.

The present research observed a marginal distinction in the mental well-being of teenagers based on family type; however, this difference did not reach statistical significance. The mean score of mental health evaluation was somewhat higher among teenagers from nuclear family types compared to those from joint family types, which suggests that adolescents from nuclear families may have better mental health outcomes than their counterparts from joint families. One potential explanation for these findings is that adolescents living in nuclear family structures may have limited opportunities for interpersonal engagement with extended family members, leading to decreased socialization (Smithson and Lewis, 2000).

The present research also observed a marginal difference between the mental health assessment and engagement in extra-curricular activities; however, this distinction did not reach statistical significance. The mean and standard deviation for involvement in extra-curricular activities were determined.
Previous research done in Brazil showed a noteworthy correlation between the evaluation of mental health and engagement in extra-curricular activities (Reverdito, 2017). One potential explanation for the difference is the smaller sample size used in the present research compared to the earlier investigation. The marginal distinction between the mental health assessment and engagement in extra-curricular activities suggests that children participating in such activities can cultivate their social skills, critical thinking abilities, leadership qualities, time management proficiencies, and collaborative aptitude in pursuing a collective objective (Reiss et al., 2019).

The current investigation revealed no statistically significant correlation between the evaluation of mental health and the number of siblings. This finding is comparable to prior research: preceding research in Japan showed a lack of statistically significant correlation between mental health evaluation and the number of siblings (Liu, 2015). A possible explanation is that the number of siblings alone does not have a statistically significant impact on mental health; rather, there appears to be a multifaceted association involving the kind of siblings, gender, age, and variations among siblings (Liu, 2015). The present research findings indicate a positive link between the number of siblings and the mental health scores of teenagers: as the number of siblings grew, the mental health scores of adolescents also increased. This suggests adolescents with fewer siblings tend to exhibit better mental health (Reverdito et al., 2017).

The current investigation observed no statistically significant correlation between the evaluation of mental health and birth order. This finding aligns with other research: the study done in Japan demonstrated no statistically significant correlation between birth order and mental health evaluation (Liu, 2015). The majority of children included in the present research were found to be middle children. A possible explanation lies in the observation that middle children often experience a desire to vie for parental attention as they find themselves between younger and older siblings (Kessler et al., 2007).
---
Conclusion
The present research has shown that there is no statistically significant association between the mental health status of adolescents and the employment status of their mothers. Overall, the findings suggest that whether mothers are working or not working has no significant impact on the mental well-being of their offspring.
---
Declarations
---
Data Availability statement
All data generated or analyzed during the study are included in the manuscript.
---
Ethics approval and consent to participate
Approved by the department Concerned.
---
Consent for publication
Approved
---
Conflict of interest
The authors declared the absence of any conflict of interest.
f4ccf22614c320f96212cfada0b6b133ed8dd974 | Structuring Qualitative Data for Agent-Based Modelling | 2,015 | [
"JournalArticle"
] | Using ethnography to build agent-based models may result in more empirically grounded simulations. Our study on innovation practice and culture in the Westland horticulture sector served to explore what information and data from ethnographic analysis could be used in models and how. MAIA, a framework for agent-based model development of social systems, is our starting point for structuring and translating said knowledge into a model. The data that was collected through an ethnographic process served as input to the agent-based model. We also used the theoretical analysis performed on the data to define outcome variables for the simulation. We conclude by proposing an initial methodology that describes the use of ethnography in modelling. | Introduction 1.1
Building empirically-grounded artificial societies of agents requires qualitative and quantitative data to inform individual behaviour and reasoning, and document macro level emerging patterns (Robinson et al. 2007). While quantitative data can be collected through surveys, literature and other available sources, gathering qualitative data to design the behaviour of the agents, their decision making process and their forms of interaction is not a straight-forward task (Janssen & Ostrom 2006). Likewise, macro-level data for model validation requires theoretical analysis about the system that is being modelled (Robinson et al. 2007).
1.2 Modellers commonly use behavioural and social theories, and desk research to cover the qualitative aspects of agent-based models. They may also use surveys and statistical analysis to understand the decision making behaviour of individuals (Sanchez & Lucas 2002;Dia 2002).
---
1.3
One field of research that can also be used to collect data for agent-based models is ethnography (Bharwani 2004). Ethnography is a research method covering many approaches in anthropology. The data is gathered through interviews and field surveys which are then 'coded' [1] for theoretical analysis. The collected data is a rich set for understanding human behaviour and interaction which is also a good source to build artificial humans or agents. Furthermore, the theoretical analysis that is performed on ethnographic data could be a good source of macro level data for model validation by observing whether the same mechanism and patterns concluded from the analysis result from the simulation (Robinson et al. 2007).
---
1.4
Since ethnography provides a rich set of data about the system and its entities, we anticipate it can be used to make richer agent-based models populating them with empirically grounded data. However, this data, although coded for theoretical analysis, is difficult to interpret and decompose in order to build agents and their behavioural rules. Ethnographic data is normally in textual format obtained from interviews, fieldwork, participant observation or formal documents (Yang & Gilbert 2008).
---
1.5
The difficulty in making use of ethnographic information for agent-based modelling and simulation (ABMS) is due to the fact that, in qualitative ethnographic research, the interviewees are normally allowed to talk about their concerns in an open manner, which may lead to an overload of information that may also be immensely rich and diverse in terms of content. In addition, the researcher and the interviewees each have their own world-view, which leads to bias, as abstraction and generalization are required to arrive at specifications of behaviour and characteristics suitable for building agent-based models.
---
1.6
The most complete research at the intersection of ABMS and ethnography is Bharwani (2004), which provides a detailed procedure for the fieldwork process and describes how ethnographic data is collected and formalized. Bharwani (2004) used knowledge engineering techniques in the process, allowing a continued engagement with the interviewees. She designed a specific ontology (i.e., architecture) for her particular domain, namely agro-climatic systems, to decompose the ethnographic information into a model. Yang and Gilbert (2008) discuss the differences and similarities between ethnographic data and ABMS and propose recommendations for modellers when using ethnographic data. They emphasize the requirement for computer-aided qualitative analysis to manage and structure the data. Another requirement they indicate is a data model to represent relationships among actors (Yang & Gilbert 2008).
---
1.7
There are also case-specific examples of using qualitative data in agent-based models. Geller and Moss (2008) present a model of solidarity networks in Afghanistan, informing agents' structures, behaviour and cognition with qualitative data. They use an evidence-based approach in which agents' behavioural rules are directly drawn from empirical studies. Moore et al. (2009) use a combination of ethnography and ABMS to study psychostimulant use and related harms. They also indicate the difficulty of generalizing ethnographic information to build agent-based models. They built a model called SimAmph as a shared ontology to combine ethnography and ABMS for their particular case, which proved to be useful in making the connection between the two domains as well as in facilitating collaborative model development and analysis.
1.8 Thus, from the literature, it appears that a shared ontology or a conceptual framework is one of the main requirements for generalizing and structuring qualitative information, especially ethnographic data for ABMS. To address this requirement, in this research, we use an ABMS framework called MAIA (Ghorbani et al. 2013) which provides a shared ontology for social systems, covering a diversity of social, institutional, physical and operational concepts that are required for building agent-based models. Using MAIA as a template of required concepts may help collect and structure ethnographic data for building agent-based models. Therefore, in this research, we explore this possibility by using this modelling framework to structure ethnographic data collected from interviews, fieldwork and formal documents to build an agent-based model. To underpin this possibility, we use a case study on innovation practices in the Dutch horticulture sector.
---
1.9
The remainder of this paper is as follows. In Section 2, we give a brief overview on ethnography and introduce the MAIA framework. In Section 3, we introduce the horticulture case study. In Section 4, we explain the methodological process of integrating ethnographic processes into ABMS. In Section 5, we discuss the lesson learnt from this process and analyse our methodological process. Finally, we conclude in Section 6.
Background
---
2.1
The goal of this research is to propose a methodology for using ethnography to build agent-based models. In this section, we will first explain ethnography. Then, we will introduce the MAIA framework, which will be used as the tool for this methodological process.
---
Ethnography
---
2.2
Ethnography is a field of science that spans many methods and schools of approaches in anthropology. The power of ethnographic research is that real people are studied at the level of small communities/groups or individuals, and at the societal level, while the mutual interaction is also considered. This qualitative research aims to address complex phenomena by analysing and interpreting the system from the participants' point of view. Ethnography is often exploratory in nature, using observations to construct the analysis from 'bottom-up'. Together, this appears to be what is needed for developing agent-based models, in order to characterize the interaction of the individual and the system:
Ethnographic research can range from a realist perspective in which behaviour is observed to a constructivist perspective where understanding is socially constructed by the researcher and subjects. Research can range from an objectivist account of fixed, observable behaviours to an interpretivist narrative describing "the interplay of individual agency and social structure." Critical theory researchers address "issues of power within the researcher-researched relationships and the links between knowledge and power" (Ybema et al. 2010).
---
2.3
In ethnography there are several types of methodologies, which can broadly be categorized as either inductive or deductive. An inductive approach to ethnography formulates theories from the 'bottom-up' rather than from the 'top-down'. This means that the researcher starts by observing the community and by looking for repeated patterns of behaviour. If certain themes continue to appear, the researcher can develop a tentative hypothesis that is then verified and which may be turned into a theory. This may require the collection of more corroborating data from other communities within the same society [2]. 'Grounded theory' is an inductive method of analysis commonly applied in ethnography to help scientists generate theories (Corbin & Strauss 2008). Unlike other approaches, grounded theory does not start with hypotheses about social behaviour but concludes with them. The grounded theory approach is an iterative process where the analysis of the data may raise new questions that stimulate new data collection (Neumann 2014). While this describes inductive research, some anthropologists also take the deductive approach, using prefixed questionnaires, hypotheses, quantitative data, statistics, etc.
---
2.4
The inductive approach is more flexible, however, when it comes to addressing human societies, as it helps the researchers let go of their own preconceived (and often culturally biased) ideas of what the society they are studying is like. While the inductive approach is still used in cultural anthropology today, the practice has shifted from 'start fieldwork and wait for answers' to 'start fieldwork with a few general questions to answer'. This provides enough of a framework to focus the research, while leaving the questions general enough to allow for the flexibility that studying human culture needs. Some methods play a central role in this inductive approach:
Open-ended and semi-structured interviewing: semi-structured interviews are open-ended, but the interview is guided by a list of topics [3]. Such interviews allow discussions that have not been prepared for, while the list guides the discussion; together, this renders the interview both efficient and effective.
Participant observation and field work: this method is the foundation of cultural anthropology, and entails the residence of the researcher in a field setting, where the observer blends into the daily life of the people and may closely monitor their activities.
---
2.5
The data produced in ethnography is a combination of written interviews, recordings, documents and personal notes. Structuring, analysing, interpreting and presenting the data is therefore an important step. The richness of data from ethnographic studies can be organized in programs like Atlas.ti [4] . In the analysis process, the next step is to generate categories, themes and patterns from the organized data. The processed and organized data can then be inspected and interpreted, and theories can be used to frame and analyse the data to elucidate patterns and give meaning and explanation to the data.
The MAIA Framework
2.6
MAIA (Modelling Agent systems based on Institutional Analysis) is a modelling framework that structures and conceptualizes an agent-based model in a high-level modelling language (Ghorbani et al. 2013). The concepts in the framework are a formalization of the Institutional Analysis and Development (IAD) framework of Elinor Ostrom (2009), extended with concepts from other social science theories (Structuration (Giddens 1984), Social mechanisms (Hedström & Swedberg 1996) and Actor-centered institutionalism (Scharpf 1997)).
---
2.7
MAIA has been designed to support the participatory development of agent-based simulations. Because its concepts are drawn from social science theories rather than from programming constructs, the framework can be used by inexperienced modellers and by those without programming skills. Furthermore, an online tool [5] supports the conceptualization process of agent-based models. In this tool, the MAIA model (i.e., the conceptual model developed using MAIA) is observable and traceable through cards and diagrams and can therefore be used for communication with domain experts and problem owners for concept verification. MAIA has been evaluated in several projects (e.g., transition in consumer lighting, the wood-fuel market, e-waste recycling sector, and manure-based bio-gas energy system) (Ghorbani 2013).
---
2.8
The framework provides a guideline to arrive at a comprehensive overview, if not a full model, of a social system by defining five interrelated structures that group related concepts:
1. In the Collective structure actors are defined as agents by capturing their characteristics and decision criteria based on their perceptions and goals.
2. The Constitutional structure defines roles and institutions. Actors can take multiple roles in social systems. These roles are formalized as unique sets of objectives and capabilities. Roles allow efficient modelling of heterogeneous agents who perform similar tasks. Institutions are defined as the set of rules devised to organize repetitive activities and shape human interaction (Ostrom 1991). In MAIA, institutions are defined using the "ADICO grammar of institutions" proposed by Crawford and Ostrom (1995). In ADICO, 'A' is the attribute or the actor who is the subject of the institution, 'D' is the deontic type of the institution (prohibition, obligation, permission), 'I' is the aim of the institution, 'C' is the condition under which the institutional statement holds and 'O' is the sanction for non-compliance to the institution (a minimal data-structure sketch of an ADICO statement is given after this overview).
3. The Physical structure is the non-social environment that the agents are embedded in. Its building blocks are physical components.
4. The Operational structure is viewed as an action arena where different situations take place, in which participants interact as they are affected by the environment. These produce outcomes that in turn affect the environment. The agents, influenced by the social and physical setting of the system, perform their actions in the action arena. The action arena contains all the entity actions, ordered by plans, which are in turn ordered by action situations.
5. The Evaluative structure provides concepts with the help of which the modeller can indicate what patterns of interaction, evaluation, and outcomes she is interested in. The modeller identifies those variables that can serve as indicators for model validity (is it sufficiently realistic?) and model usability (will its implementation help me to explore the question(s) I set out to address?).
Figure 2 at the end of this article shows the concepts in MAIA. Extensive specification of MAIA can be found in Ghorbani et al. (2013).
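To make the ADICO grammar more concrete, the sketch below shows one possible way to represent institutional statements as a simple data structure. This is an illustrative Python sketch, not part of the MAIA tooling itself; the example statement is a hypothetical paraphrase of the loan situation described later in this paper (a grower with a loan must report his money level to the bank, or the bank may take over).

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Deontic(Enum):
    """The 'D' in ADICO: prohibition, obligation, or permission."""
    PROHIBITED = "prohibited"
    OBLIGED = "obliged"
    PERMITTED = "permitted"


@dataclass
class AdicoStatement:
    """An institutional statement in ADICO form (Crawford & Ostrom 1995)."""
    attribute: str            # A: the role/actor the statement applies to
    deontic: Deontic          # D: prohibition, obligation, permission
    aim: str                  # I: the action the statement is about
    condition: str            # C: when the statement holds
    or_else: Optional[str]    # O: sanction for non-compliance (None for norms/shared strategies)

    def is_rule(self) -> bool:
        # In the ADICO taxonomy, only full ADICO statements (with a sanction) are rules;
        # ADIC statements are norms, AIC statements are shared strategies.
        return self.or_else is not None


# Hypothetical example, paraphrased from the loan action situation in this case study.
report_to_bank = AdicoStatement(
    attribute="grower with an outstanding loan",
    deontic=Deontic.OBLIGED,
    aim="report money level to the bank",
    condition="at the end of every cultivation round",
    or_else="the bank may take over the greenhouse",
)

print(report_to_bank.is_rule())  # True: a sanction is present, so this is a full ADICO rule
```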
Case Study: Horticulture Innovation
---
3.1
The key objective of our study of the horticulture sector is to elucidate the effects social institutions have on innovation practices in Westland, a region that is home to about 70% of all greenhouse acreage in the Netherlands.
---
3.2
The horticulture sector in the Netherlands at large is facing economic difficulties, which have become more severe since the crisis began in 2008 (Schrauwen 2012). The dominant presence of innovation strategies that target cost reduction and volume increase brings down the cost of products. These strategies fail to bring the growers sustained benefits, however, which causes serious problems in the sector. Due to mechanisms in the market, the growers only benefit financially from their innovations for a relatively short period. When their innovations spread in the sector, the market price of their products drops rapidly, because it is subject to fierce price competition, a characteristic of 'cost leadership' market segments. Few growers attempt to increase the value of their products by developing niche product-market combinations, or to expand their activities in the value chain by developing new channels to the market to capture a greater share of the value created between growers and consumers. Such innovation strategies beyond process innovation for unit cost-price reduction are less popular in the sector, despite their potential to counteract the effect of downward spiralling prices in competitive markets.
---
3.3
The goal of this study is to investigate the innovation practices in the Westland horticulture sector to obtain an understanding of how this observed pattern of innovation has emerged and how the underlying behaviour of growers is shaped and maintained. We use grounded theory as our methodology to perform ethnographic fieldwork. Besides using MAIA for data collection and model development, we perform a theoretical analysis using the Bathtub model of Coleman (1986) and several other theories (see Schrauwen 2012). The rationale for adopting a fieldwork approach (rooted in cultural anthropology) is that the organizations and innovation practices are socially embedded, and can be studied as such. Furthermore, the Westland is said to be home to Westlanders who share a common identity with respect to social and business culture, which is shaped by and has shaped their core business for centuries (Kasmire et al. 2013).
The Modelling Process
---
4.1
The purpose of our methodological practice is to guide the collection of data for building an agent-based model using an ethnographic approach. This process is divided into two parts. The first part uses MAIA as a template for information collection, which includes field observation, interviews and the study of formal documents. For each of these methods, we make use of the MAIA framework to semi-structure the data collection process. The second part uses the collected information to build a MAIA model.
---
Collecting data using MAIA
Structuring interviews with MAIA
---
4.2
In inductive ethnographic research, interviews are normally semi-structured. Therefore, it is common practice to develop a general structure or guideline for the interviews, to ensure that all relevant aspects are addressed. We use MAIA as the general structure for the interviews in order to cover all the information required to build an agent-based model. At the same time, we leave the questions open-ended, so that the interviewees feel free to talk about what may seem relevant to them.
---
4.3
The interviews were conducted with various stakeholders in the Westland horticulture sector (Schrauwen 2012):
Experts: Experts were interviewed to gain better insight into the sector as a whole and also to evaluate the assumptions that were being made during the analysis and modelling phase.
Growers: Fifteen growers were visited at their organization. Each interview took between two and five hours. The growers were either contacted directly or introduced by other respondents.
Organizations: The bank, churches, educational institutes, municipality, LTO GlasKracht and supermarket were the other actors interviewed, in order to find out their influence on the social network of growers, their individual capital and investment, and their knowledge and background.
---
4.4
The concepts that were used to structure the interviews and direct the questions are:
-Collective Structure
Agent Decisions: What decisions do the growers make regarding their innovation practices? The growers are allowed to talk about their decisions freely without being forced to explain how they make those decisions [6].
Agent personal value: The growers are asked about what they care about most when they are making those decisions.
Related Agents: During the interviews, the growers are asked about other social entities they may be interacting with. These can be individual actors, such as other growers, or composite actors (i.e., organizational type) such as the bank, or the municipality.
-Operational Structure
Actions and Plans: The growers are asked about what their general activities are and how often they perform these activities. In this case study, they were asked about their daily, monthly and yearly activities. If any of these practices constitutes a process, the growers were also asked about the events that take place in that process. For example, if a grower decides to apply for a subsidy, what actions does he have to perform during the application process?
-Constitutional Structure
Roles: The growers are implicitly asked about the different roles they take in their activities. This is not a straightforward question, but one that rather needs to be extracted from the explanations the growers provide. For example, a grower explains that he has to be a client of the bank to apply for a particular subsidy, or he emphasizes that he would only expand his greenhouse if he has a child who is willing to take over. From these remarks we can identify 'bank client' and 'being a father' as two of the roles the growers may assume under certain conditions.
Formal Institutions: While asking about the operational activities and decisions, the subjects are also asked about the formal procedures, rules and regulations they need to go through. This is later used to collect relevant institutional documents.
-Physical Structure
Physical Components: During the interviews, the subjects are asked about the physical entities they use in their activities, the ones they own or the ones that influence their actions. It is important to ask about this aspect while the interviewee is talking about the activities he performs, in order to limit the information to what is relevant.
The interviews are recorded and coded in Atlas.ti for later analysis.
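One simple way to operationalize this semi-structuring is to keep the interview guide as a small data structure keyed to the MAIA structures, so that coded excerpts can later be filed under the corresponding concepts. The sketch below is an illustrative Python template that paraphrases the prompts listed above; it is not the actual guide or the Atlas.ti coding scheme used in the study.

```python
# Interview guide semi-structured by MAIA: each structure lists the concepts probed
# and an open-ended prompt per concept (paraphrased from the case study description).
INTERVIEW_GUIDE = {
    "Collective": {
        "agent_decisions": "What decisions do you make regarding your innovation practices?",
        "personal_values": "What do you care about most when making those decisions?",
        "related_agents": "Which other people or organizations do you interact with?",
    },
    "Operational": {
        "actions_and_plans": "What are your daily, monthly and yearly activities, and which steps do they involve?",
    },
    "Constitutional": {
        "roles": "In which capacities do you act in these activities (e.g., bank client, parent)?",
        "formal_institutions": "Which formal procedures, rules or regulations apply to these activities?",
    },
    "Physical": {
        "physical_components": "Which physical entities do you own, use, or depend on in these activities?",
    },
}

# Coded excerpts can then be tagged with (structure, concept) pairs for later decomposition
# into the MAIA model, e.g.:
excerpt = ("I would only expand the greenhouse if one of my children wants to take over.",
           ("Constitutional", "roles"))
print(excerpt)
```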
Using MAIA for field observation
4.5
During field observation, it is important to identify the relevant properties of the entities (i.e., agents and physical components) that are addressed during the interviews. The composition of the physical entities and their connections may be observed in the field and defined as physical components in the physical structure of MAIA. Thus, in a fashion similar to setting up the general structure for the semi-structured interviews, the MAIA structures can be used as a template for collecting data during field observation.
Using MAIA for studying formal documents
---
4.6
The formal documents are collected according to the information provided by the subjects. To collect the right information for modelling institutions, the ADICO structure (see Section Background) is used as the template.
Building a MAIA model
4.7
Upon completion of the previous steps, the collected data is used to build an agent-based model. This process is conducted by extracting relevant information from the data by using the MAIA framework. Again, we look at the structures one-by-one to clarify the process [7].
Collective Structure
---
4.8
The interviewed subjects can be defined as agent-types. Each subject can be defined as one separate agent-type if the simulation is limited to the people interviewed; alternatively, one may group the agents according to some criterion and use each category to define a separate agent-type. In the greenhouse case, the 15 growers that were interviewed were divided into five categories distinguished by their stated priorities, their physical assets and characteristics. The first category is the niche growers whose greenhouse is relatively small in size and whose innovation activities are mainly marketing-and product-oriented.
The other four categories are large bulk growers, the innovative bulk growers, moderate bulk growers and shop growers (see Schrauwen 2012).
---
4.9
Agents in the simulation are not limited to the interviewees; there may also be social entities that were addressed during the interviews. For example, the European Union, which influences the growers' innovation strategies, was a social entity addressed by the growers. This entity is, therefore, also defined as an agent in the simulation.
4.10 From the qualitative data, whether in the form of field observation or interview, the properties, personal values, intrinsic behaviours and decision-making of the actors are extracted to build the agents in the model.
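As an illustration of this extraction step, the sketch below shows how such agent-types might be encoded. The attribute names and the decision rule are assumptions made for illustration only; they are not the actual agent specification used in the study.

```python
from dataclasses import dataclass, field


@dataclass
class GrowerAgent:
    """Illustrative agent-type built from interview-derived properties and values."""
    category: str                  # e.g. "niche", "large bulk", "innovative bulk", "moderate bulk", "shop"
    greenhouse_size_ha: float      # property observed in the field
    crop_type: str                 # property observed in the field
    capital: float                 # financial state, updated during the simulation
    personal_values: dict = field(default_factory=dict)  # e.g. {"continuity": 0.8, "growth": 0.3}

    def choose_innovation(self, options: list[str]) -> str:
        # Hypothetical decision rule: niche growers prefer product/market innovations,
        # bulk growers prefer cost-reducing process innovations (a stand-in for the
        # interview-derived decision criteria, which are richer than this).
        if self.category == "niche":
            preferred = [o for o in options if o in ("product", "marketing")]
        else:
            preferred = [o for o in options if o == "process"]
        return preferred[0] if preferred else options[0]


niche_grower = GrowerAgent(category="niche", greenhouse_size_ha=0.8,
                           crop_type="speciality tomato", capital=50_000.0,
                           personal_values={"continuity": 0.9})
print(niche_grower.choose_innovation(["process", "product", "marketing"]))  # "product"
```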
---
Constitutional Structure
4.11 The main aspect of the constitutional structure is the institutions. These can be formal institutions extracted from legal documents, or informal institutions, namely, norms of behaviour and shared strategies extracted from the interviews or field observations. The patterns of behaviour observed from interviews can be the result of rules imposed by the society. These are defined as norms or shared strategies. If the rule of behaviour contains an obligation or prohibition by definition, the rule is considered to be a norm. If the actors perform the same routine without any obligation from the system, that routine can be considered as a shared strategy. All the formal and informal institutions are modelled as ADICO statements as defined in Section Background. Table 1 shows some of the institutions extracted from the interviews and legal documents.
Physical Structure
4.12 Similar to building agents, the physical entities that are addressed by the interviewees are extracted from the text and defined as physical components in the MAIA model. These include energy, greenhouse and machinery (i.e., the innovative technology they adopt). The properties of these components are identified through field observation in addition to interviews. For example, during field work, it became clear that two properties, namely, the size of the greenhouses and their type of crops, mainly distinguish growers from each other.
Operational Structure
4.13 The events that were described by the interviewees are defined as actions in MAIA. The condition for performing those actions and the outcomes of the actions should be extracted from the descriptions the subjects provide. The described sequence of actions helps to define agent plans in MAIA. Finally, the modeller has to make a decision about the time loop and the actions that take place per tick. For this study, we decided that in each tick, seven action situations take place according to the following sequence:
Daily life: In this action situation, the intrinsic capabilities of actors are exercised: being born, dying, having a child, learning and starting relationships.
Cooperating: Within the action situation of cooperating, growers can group together and make a joint decision on investments in innovations. Also, knowledge, norms and values are shared amongst growers that are cooperating, adding to the social capital of the growers.
GMO: In this action situation, growers request the GMO (Gezamenlijke Markt Ordening - collective market structuration) subsidy, through which they may recover half of their investments. GMO applications can either be accepted or rejected. Previous subsidy receivers may also be punished in this action situation, based on their previous actions.
Loan: In this action situation, the grower can apply for a loan. He has to pay back his loan and report his money level to the bank, which may take over when the grower is in trouble.
Innovating: In the innovation situation, the decisions are made by the growers to invest in one of the categories of innovations. They invest their money in that innovation, while adopting a new physical component (i.e., technology) in their greenhouse with specific characteristics.
Cultivation: In the cultivation situation, all horticulture-related activities are performed, such as cultivation, employing technologies, and increasing efficiency. The investments of the previous round of innovations affect the cultivation process and produce outcomes, in terms of products, efficiency, use of inputs, etcetera. Also, the money level is checked and reported to the bank (if the grower is a member).
Selling: In the selling situation, growers calculate the costs and value of their products and calculate a market price. They sell their products to the merchandisers. Products are exchanged with money.
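The per-tick sequencing of these seven action situations can be summarized in a small scheduling sketch. This is a structural illustration in Python, assuming a simple sequential scheduler; the bodies of the action situations are placeholders rather than the study's actual implementation.

```python
class HorticultureSimulation:
    """Skeleton of the per-tick schedule: the seven action situations in order."""

    def __init__(self, growers, bank, subsidy_agency):
        self.growers = growers
        self.bank = bank
        self.subsidy_agency = subsidy_agency

    def step(self):
        # One tick = one pass through the seven action situations, in this order.
        for situation in (self.daily_life, self.cooperating, self.gmo,
                          self.loan, self.innovating, self.cultivation, self.selling):
            situation()

    # Placeholder action situations; each would contain the entity actions and plans.
    def daily_life(self): ...     # births, deaths, children, learning, relationships
    def cooperating(self): ...    # joint investment decisions, sharing knowledge/norms
    def gmo(self): ...            # GMO subsidy requests, acceptance/rejection, sanctions
    def loan(self): ...           # loan applications, repayments, reporting to the bank
    def innovating(self): ...     # investment in an innovation category, adopting technology
    def cultivation(self): ...    # cultivation, employing technology, reporting money level
    def selling(self): ...        # cost/value calculation, market price, exchange with merchandisers
```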
Evaluative Structure
4.14 To build the evaluative structure of MAIA, we used not only the collected data but also the anthropological analysis. We defined a set of variables that can be used to measure and study the possible emergent system elements from the simulation according to this analysis.
---
4.15
The theoretical analysis showed that a phenomenon called 'isomorphism' steers companies towards the same characteristics, which gives rise to similar innovation practices that are not effective in the long run and may even harm the sector. To explore this phenomenon in the simulation, we defined the variable 'homogenization' to calculate the variation in innovation types. This value is measured over time.
The correlation between subsidies and this variable is also identified as a parameter of interest according to the ethnographic analysis.
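The paper defines 'homogenization' only as the variation in innovation types, without giving a formula. The sketch below therefore assumes one plausible operationalization, one minus the normalized Shannon entropy of the innovation-type shares, purely for illustration; the actual metric used in the model may differ.

```python
import math
from collections import Counter


def homogenization(innovation_types: list[str]) -> float:
    """Assumed metric: 1 - normalized Shannon entropy of innovation-type shares.
    Returns 1.0 when all growers adopt the same innovation type (full isomorphism)
    and 0.0 when adoption is spread evenly over all observed types."""
    counts = Counter(innovation_types)
    n = len(innovation_types)
    k = len(counts)
    if n == 0 or k == 1:
        return 1.0
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return 1.0 - entropy / math.log(k)


# Tracked per tick, e.g.:
print(homogenization(["process", "process", "process", "process"]))  # 1.0: full isomorphism
print(homogenization(["process", "product", "marketing", "chain"]))  # 0.0: maximally varied
```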
4.16 One other issue in the analysis was 'decreasing product value'. Many products, especially bulk products, are sold with little margin. This means that the income flowing back to the grower is at risk of being less than cost, which decreases their capital. If just one innovation fails to give good returns, growers may be put in danger, possibly even facing bankruptcy. Therefore, another variable to keep track of in the simulation is the development of product value (i.e., product price) in relation to time and different innovation types.
4.17 The sector's sustainability is another point of interest in the study. This issue stands on three different pillars, namely, economic, ecological and social. To experiment with these pillars in the simulation, for the economic pillar, the ratio between product value and bankruptcy is calculated in relation to subsidies, loans and time. For the ecological aspect, the relation between water, energy and nutrients, and the amount and value of products is defined as a metric. Finally, to track the social influence, we define two variables: social capital and bankruptcy.
4.18 In this section, we presented an overview of the process of ethnographic data collection and analysis used for conceptualizing an agent-based model of the horticulture sector. We explained how MAIA concepts can be used to inform data collection, and to build an agent-based model. In the next section, we will generalize this methodological procedure, to make it applicable to other social studies.
Generalizing the Process
---
4.19
Figure 1 shows the general process of using ethnographic data to build an agent-based model using MAIA. Some concepts in the MAIA structures, as illustrated on the left side of the figure, are primarily used to semi-structure the data collection process. The collected data is then decomposed into an agent-based model, again, using the MAIA structures.
4.20 As Figure 1 shows, there is a cycle between the ethnographic research and the building of a MAIA model. Although semi-structuring data collection minimizes the need to redo interviews, it may still be required to collect further information for the model. This would especially hold for field observations and document collection.
4.21 Besides building the conceptual model, the ethnographic data is also used to perform theoretical analysis. Not only can this analysis be used to further enrich the model, specifically in the evaluative structure (see previous section), it is also used to draw conclusions. These conclusions can be used independently or in combination with the simulation results. Some sort of triangulation can thus be completed, comparing the social analysis with the dynamics generated by running the model. What may be an issue here, however, is that the same input data is used for both methods, so they are not completely independent.
Discussion
5.1 Building an agent-based model requires both quantitative and qualitative data. Although much of the information can be represented in the form of numeric values, the actual context of the model, which shows the order of the events and how agents make decisions and interact, requires qualitative information. Ethnography can provide rich data for building agent-based models both at micro and macro levels. However, it needs structure and interpretation to be actually applicable to this simulation approach (Yang & Gilbert 2008). In this paper we presented MAIA as a tool to collect and structure ethnographic data for ABMS. The process of building an agent-based model for the horticulture sector helped us to identify several benefits of using this tool.
---
5.2
First, the MAIA framework ensures consistency and coherence between the features extracted from the ethnographic process. Since MAIA is constructed as a software meta-model, its soundness, completeness and parsimony have been verified (Ghorbani 2013). Therefore, the modeller can be confident that the collected and structured data is by default consistent in the model.
---
5.3
Second, as Dey (2003) indicates, analysing qualitative data also involves an abstraction process, which may not be a straightforward task given the immense amount of detail provided by ethnography, which mostly concerns individuals. Since MAIA is an abstract template or 'ontology' for a set of concepts, it proved to be highly instrumental for facilitating and documenting this abstraction process.
---
5.4
Third, MAIA also helps to identify the normative aspects of the system when making use of ethnographic data. The insights people provide about their view of the world through interviews are not based on external reality but are culturally generated and emergent. With the ADICO statements in MAIA, the modeller can extract the norms and shared strategies from the interviews in order to add a cultural/institutional dimension to the simulation.
---
5.5
Fourth, an important contribution of using MAIA is that not only can the collected ethnographic data be used to build an agent-based model; the theoretical analysis performed on the data is also put to use. The theoretical ethnographic analysis helps define the variables that measure the outcomes of the simulation. These variables are covered in the evaluative structure of MAIA. Therefore, besides informing agent behaviour, the methodological process introduced in this paper can help measure the possible outcomes of interest, i.e., macro-level patterns for the simulation.
---
5.6
Fifth, when an ethnographic researcher uses MAIA, her activities become more structured and tractable. We anticipate this will facilitate the interpretation and discussion of field research, and lead to a growing body of empirically grounded information that can be re-used for modelling and research studies.
---
5.7
Finally, linking the body of knowledge of anthropology and agent-based modelling of social systems may be mutually beneficial. We believe the proposed method supports non-computing anthropologists in building agent-based models in order to complement their research methods. To explore the feasibility of this claim, an anthropologist performed the whole process, starting from the ethnographic fieldwork to the development of the conceptual model. We observed that MAIA can indeed bring ABMS within the reach even of anthropologists who have no familiarity with modelling.
5.8 Indeed, a major difficulty in building agent-based models from such data is the step from a limited number of interviewed individuals to the creation of a whole society. The stories and decision-making are usually personal and related to personal incidents; it is hard to derive certain 'types' of agents from that, because those coincidental incidents in life have a large influence. While estimating the percentages of the types of people forming the society is hard, in the eventual ABM these can become parameters for variation.
---
5.9
Finally, it is important to emphasize that the structuring of collected data, although highly facilitated by MAIA, still depends on the creativity of the modeller. There are many choices and interpretations that the modeller has to make to transform qualitative data into an agent-based model. When MAIA is used, however, there will be both an unambiguous language to communicate about the decisions taken, and a traceable track record of how the researcher arrived from the empirical data at the interpreted model and model results.
---
Conclusion
6.1 Managing and structuring data, especially qualitative data, is a major challenge for agent-based modelling. This research presented a method to effectively use ethnographic data for building agent-based models.
---
6.2
We used the MAIA framework to semi-structure the data collection procedure and later used the same framework to decompose the information and build a conceptual agent-based model. The conceptual model was then used to produce running simulations.
---
6.3
Although MAIA facilitated the structuring of qualitative information, another phase of data collection is required, namely one to complete the quantitative aspects of the simulation. This phase is not yet supported by the methodological process presented here. Therefore, the next step of this research is to extend the MAIA framework to support the quantitative data collection process.
Figure 2. The UML class diagram for the MAIA meta-model (Ghorbani et al. 2013) | 36,850 | 747 |
29185015af93b629a5fbc5fad0bb24b6c86c2d0a | Predictive Models of Maternal Harsh Parenting During COVID-19 in China, Italy, and Netherlands | 2,021 | [
"JournalArticle"
] | Background: The COVID-19 pandemic drastically impacted on family life and may have caused parental distress, which in turn may result in an overreliance on less effective parenting practices. The aim of the current study was to identify risk and protective factors associated with impaired parenting during the COVID-19 lockdown. Key factors predicting maternal harsh discipline were examined in China, Italy, and the Netherlands, using a cross-validation approach, with a particular focus on the role of allomaternal support from father and grandparents as a protective factor in predicting maternal harshness. The sample consisted of 900 Dutch, 641 Italian, and 922 Chinese mothers (age M = 36.74, SD = 5.58) who completed an online questionnaire during the lockdown. Results: Although marital conflict and psychopathology were shared risk factors predicting maternal harsh parenting in each of the three countries, cross-validation identified a unique risk factor model for each country. In the Netherlands and China, but not in Italy, work-related stressors were considered risk factors. In China, support from father and grandparents for mothers with a young child were protective factors. Our results indicate that the constellation of factors predicting maternal harshness during COVID-19 is not identical across countries, possibly due to cultural variations in support from fathers and grandparents. This information will be valuable for the identification of at-risk families during pandemics. Our findings show that shared childrearing can buffer against risks for harsh parenting during COVID-19. Hence, adopting approaches to build a pandemic-proof community of care may help at-risk parents during future pandemics. | INTRODUCTION
The COVID-19 pandemic drastically impacted on family life. Parents worried about their own and their families' health, job losses, and salary reductions, while keeping up their family life in social isolation. Moreover, because of (partial) school closures, families were suddenly faced with the additional pressure of homeschooling their children. There may be considerable variability in how families deal with pandemic challenges and the extent to which they were impacted by COVID-19. For some families, the sequelae of the pandemic may lead to heightened psychological distress and, in turn, an overreliance on less effective parenting practices such as a harsh disciplinary style or even child abuse or neglect (1), with negative impact upon children's wellbeing. Other families, however, may manage relatively well. The current study therefore aims to identify risk and protective factors associated with impaired parenting during the lockdown amidst COVID-19. More specifically, we examined key family factors predicting maternal harsh discipline across three countries, China, Italy, and the Netherlands, using a cross-validation modeling approach (2,3). We particularly focused on the role of support from father and grandparents as a protective factor facilitating mothers' adaptability and buffering the effects of pandemic-related distress on caregiving behaviors. Harsh discipline, characterized by parental attempts to control a child using verbal violence (e.g., screaming) or physical punishment (e.g., hitting) (4), can be considered child emotional or physical maltreatment (5,6). Given the long-term negative consequences of maltreatment for children's development (7), examining the predictive performance of factors contributing to harsh parenting is essential for identifying at-risk families and preventing detrimental effects on children during future pandemics.
---
Kinship Networks and Harsh Parenting
The traditional African proverb "It takes a village to raise a child" may express an underlying truth (8). Mothers, or fathers, do not rear children on their own, but childrearing is usually embedded in larger kinship networks (e.g., grandparents, relatives, neighbors) and communities (schools, daycare centers) that offer support with childcare and/or education. This shared child care appears crucial for parental well-being and optimal child development. For example, involvement of nonresidential grandparents decreases parental stress and promotes children's well-being by stimulating prosocial behaviors and academic engagement (9). Similarly, support from relatives, friends, or neighbors reduces parental stress and lowers risk for child abuse and neglect (10). However, during COVID-19, support outside the family unit has abruptly been lost due to social distancing, closures of schools and daycare centers, and other pandemic and lockdown restrictions. Parents suddenly needed to rely solely on each other, yet distress triggered by the pandemic may interfere with the ability to provide adequate partner support (11). These circumstances may increase risk for harsh parenting practices.
---
Pre-existing Vulnerabilities and Harsh Parenting
Families with pre-existing vulnerabilities may be particularly at risk for inadequate or harsh parenting during the pandemic. For example, economic hardship is an important factor contributing to risk for child abuse and neglect (6), but the level of risk that pandemic-related financial insecurities poses for parenting abilities likely depends on families' financial situation prior to the pandemic (11). Similarly, psychological distress induced by the pandemic may be particularly difficult to regulate for parents with pre-existing mental health problems, another well-known factor elevating risk for harsh parenting (6). Further, major life stressors, such as the COVID-19 pandemic, may lead to marital conflicts and dissolution or intimate partner violence (IPV) (11). The first studies on family functioning during COVID-19 report increased rates of IPV (12), which may spill over to and harm the child because violence is modeled as a way to deal with conflicts that may also emerge in the parent-child relationship (6). Lastly, environmental factors, such as overcrowded living conditions and lack of access to private outdoor space, may further elevate risk for abuse (13), in particular during lockdown amidst COVID-19 when families are required to stay home.
---
Protective Factors and Harsh Parenting
Protective factors may, however, buffer the negative effects of COVID-19 on parenting abilities. These protective factors may either lie at the level of the individual parent, such as good (pre-existing) mental and physical health, or may be located in the family composition. One potentially important factor buffering the impact of crises, such as COVID-19, on maternal caregiving is allomaternal care, that is, childcare by adults other than the biological mother including fathers, grandparents, and other group members. Evidence from studies with high-risk families underscores how much allomaternal support matters. For example, father support reduces the adverse long-term effects of maternal depression during a child's infancy on later child behavior problems (14), suggesting that father involvement may compensate for maternal stress. In contrast, in families where father involvement is low or the father is absent, as in the case of single mothers, mothers are at increased risk for abusing or neglecting their children (15,16). Other family members may also offer allomaternal assistance, such as older siblings (17) and grandmothers (18). Research shows that the presence of a grandmother in the same household with a teenage mother increases the quality of mothering and, in turn, the chances of a secure mother-infant attachment relationship (19). Similarly, having a grandmother at hand predicts improved health and cognition among low birthweight infants (20), although under adverse conditions, such as extreme poverty, the presence of grandparents may reduce life expectancy of offspring because they use scarce resources (21). These findings are in line with the grandmother hypothesis (22), stating that the extended human female postmenopausal lifespan is an evolutionary adaptation that allows grandmothers to provide allomaternal care to their grandchildren in order to increase their fitness. Based on the grandmother hypothesis, it could be expected that shared childrearing may function as a resilience buffer in times of adversity and may also exert protective effects on mothers' caregiving abilities in the times of pandemics.
---
Cultural Differences Across the Netherlands, Italy, and China
Although the cooperative nature of human childrearing is universal (23), it is influenced by cultural and economic factors (24). For instance, Western-European families are often only partly supported in child care by grandparents, whereas in low- and middle-income countries, for example, grandparental involvement is much stronger (25). Moreover, the probability of grandparental co-residence with children and grandchildren is higher in nonwestern societies with traditions of filial piety (26). In China, co-residence with extended family, including grandparents, is common practice (27) and grandparents are often involved in full-time child care. In particular, the grandmother is an important child care provider for Chinese mothers who need to balance the competing demands of childcare and (full-time) work in the absence of adequate child care provisions (28). Chinese fathers also share care with mothers and are more likely than in the past to emotionally invest in their children because the single-child policy has weakened gender roles (29,30). In contemporary China, child rearing is therefore considered a joint mission of mothers, fathers, and grandparents who together form an intergenerational parenting coalition (27).
During COVID-19, this extended family may be a source of resilience as the unexpected burden of the pandemic is shared among more people. Indeed, in a previous study with the same sample, we found that support from grandparents during the lockdown was associated with fewer maternal mental health symptoms (31). From an evolutionary perspective, it has been argued that human childcare practices in the context of extended families enhance children's survival by sharing the costs and load of raising children (18). Exclusive maternal care has even been considered out of step with nature (18) because, according to calculations of evolutionary anthropologists, human children consume more than 13 million calories until they reach adulthood (32), which is far more than a mother can provide. Contrasting with extended families in China, in most western societies, including Italy and the Netherlands, the nuclear family is the traditional family, consisting of parents and children, living apart from grandparents and other relatives (e.g., 33). This may be disadvantageous during the lockdown. Non-residential grandparents, among those most vulnerable to COVID-19, were kept at a distance from children and grandchildren, which increased their chances of survival but posed a problem for working parents who had grandparental childcare support prior to the pandemic.
For mothers in nuclear families, father involvement in childcare may be an important resilience factor buffering the effects of the pandemic on maternal caregiving. Yet, father involvement varies across cultures and paternal behaviors should not be presumed to have similar influences on mothers' caregiving behaviors across different cultural groups. For example, Craig and Mullan (34) showed that mothers' and fathers' work arrangements only predicted equal distribution of childcare between parents in countries supporting equal gender divisions. In Italy, where gender inequality is high and the rate of female employment is amongst the lowest in Europe (35), fathers do not re-adjust for mothers' working hours (34). Italian fathers tend to stick to unequal shares of childcare, promoting Italian families to rely on additional sources of allomaternal support. Due to modestly available formal child care and a ubiquitous feeling of compliance, it is customary that Italian grandparents assist parents and take care of their grandchildren on a regular basis (36).
Contrasting with Italy, the Netherlands shows a lower prevalence of the male breadwinner family. Dutch mothers often switch to a part-time job while fathers keep working full-time after becoming parents (37). This is also known as the one-anda-half earner household (38). Although Dutch women still bear the largest part of the burden of household chores and child care activities in daily life (38), levels of gender equality are considered quite high (39). The Dutch formal child care system is used by a large proportion of parents (38,40). Nevertheless, many parents in the Netherlands prefer to combine formal child care with some kind of informal child care, the most prevalent form of the latter being non-residential grandparents taking care of their grandchildren (40). Co-residence with grandparents is, however, uncommon in the Netherlands and COVID-19 separated many Dutch children from their non-residential grandparents, thus lowering sources of allomaternal support.
In addition to cultural differences in family composition, culture may also shape parenting practices, since cultural values and norms may affect attitudes about raising children, which may in turn influence parent-child interaction (41). It is therefore important to take into account the role of cultural context (42) when examining parenting during the COVID-19 lockdown. More specifically, parents may acquire certain beliefs on disciplinary styles, such as corporal punishment, within a cultural context, and harsh discipline may occur more often in cultures or countries where the practice of violence is viewed as acceptable or normative. For example, a cross-cultural study on parenting across six countries by Lansford et al. (43) showed that harsh parenting is most prevalent in countries where physical discipline is perceived as normative by parents. However, other research shows that there are far more cultural similarities than differences in parenting practices and that differences among cultural groups disappear when socioeconomic status is controlled (44).
---
Aims and Hypotheses
In the current study we examined risk and protective factors predicting harsh parenting among mothers with children aged 1-10 years during the COVID-19 lockdown in China, Italy, and the Netherlands. Examining harsh parenting during the lockdown is important because expressions of violence in a family context have negative effects on children's development and psychosocial adjustment (45,46). Our study extends a previous study in which we examined maternal mental health during the lockdown, but did not examine harsh parenting (31). Initial findings of research on the impact of COVID-19 point to increases in harsh parenting, with pandemic-related distress as a mediator (47). However, social and cultural context may either accentuate or minimize the impact of individual-level and family-level factors predicting harsh parenting. Hence, the constellation of parent and family characteristics as predictors of maternal harshness may not be replicable across countries. In the current study, maternal harsh parenting will therefore be examined across cultures by applying a cross-validation approach (2) for selecting models predicting maternal harshness in each country. Cross-validation allows accurate estimation of how a model would perform on other samples (3). In a predictive modeling context, cross-validation does not select the model predictors based on statistical significance, but based on their predictive performance. Predictive performance is especially important for the purpose of the current study, because in case of future pandemics involving lockdowns, identifying families at risk of harsh parenting or even child abuse is essential.
It can be expected that previously identified antecedents of child abuse and neglect, such as parental psychopathology, marital conflict, low socioeconomic status, low father involvement, a large number of children, and poor housing (6,15,16,48), also enhance risk for harsh caregiving in the time of COVID-19. However, in addition to these previously identified antecedents, risk factors more closely related to acute COVID-19-related stress, such as COVID-19-related concerns about health and work, may further elevate risk for maternal harshness, whereas allomaternal support may exert protective effects on mothers' caregiving abilities. Hence, our first hypothesis was that previously identified risk factors for child abuse and COVID-19 related stress about health and work would increase risk for harsh maternal caregiving, whereas involvement of father and (co-residential) grandparents would buffer against risk. Second, we hypothesized, in line with the grandmother hypothesis (22,49,50), that grandparental involvement would be particularly beneficial for mothers with young children who are still highly dependent on the physical and emotional availability of caregivers. Thirdly, we expected that high levels of allomaternal support, i.e., support from both fathers and grandparents, facilitate mothers' adaptability and mitigate the effects of pandemic-related distress on caregiving. Lastly, we hypothesized that mothers in the three countries may be differently impacted by the pandemic. This expectation was also based on our previous finding that grandparental support during the lockdown lowers risk for mental health symptoms for Chinese mothers, but not for Italian and Dutch mothers (31). Although child physical abuse is a global phenomenon, unaffected by cultural-geographical factors (51), factors predicting harsh parenting during COVID-19 may differ across countries due to cultural variations in allomaternal support. Thus, we tested the hypothesis that the constellation of factors contributing to maternal harsh parenting during COVID-19 is subject to influences of family composition and may therefore vary across countries.
---
METHODS
---
Participants and Design
Dutch, Chinese, and Italian parents aged 18 years or older with children between 1 and 10 years were invited to participate by completing an online survey. In each country, parents were recruited by contacting elementary schools. In the Netherlands and Italy, parents were also recruited by contacting day care centers using social media advertisements (facebook, linkedin, twitter). Dutch parents were also recruited by distributing the questionnaire among parents who were members of the Dutch I&O research panel (www.ioresearch.nl). The minimum sample size was 400 parents in each country, providing sufficient power to detect moderately sized correlation coefficients (power = 0.80, r = 0.20) between harsh parenting and each of the predictor variables, but we strived for larger sample sizes. Parents who completed the questionnaire but did not meet the inclusion criteria (e.g., they had only children older than 10 years, N = 8 Dutch parents, N = 47 Chinese parents) were excluded. The final sample consisted of 1,156 Dutch parents, 674 Italian parents, and 1,243 Chinese parents. Fathers were excluded from the analyses for the purpose of the current study, resulting in a sample of 900 Dutch, 641 Italian, and 922 Chinese mothers for this study. Characteristics of the Dutch, Chinese, and Italian samples are presented in Table 1. Permission for the study was obtained from the local ethics committees of the School of Social and Behavioral Sciences of Tilburg University, Department of Psychology of Padua University, and Peking University Medical Ethics Board. Participants gave informed consent and were given a chance at winning a gift voucher.
---
Procedure
Data was collected using Qualtrics in Italy and the Netherlands, and using a web-based platform (https://www.wjx.cn/app/survey.aspx) in China. Timeframes for data collection were: April 17-May 10 2020 for the Netherlands, April 21-June 13 2020 for Italy, and April 21-April 28 2020 for China. During these timeframes, governmental pandemic measures in the three countries included: remote working, keeping social distance from others, and closure of schools and daycare centers. In each country, older people in particular were advised to keep their distance. Dutch people were allowed to leave their home if they had no COVID-19 diagnosis or symptoms and if they had not been exposed to infected others. Also in Italy, people were gradually allowed to leave their home during the period of data collection (after May 4). The Chinese data was collected in the aftermath of the COVID-19 peak, but pandemic restrictions were comparable to those in the Netherlands and Italy. Similar to Italy and the Netherlands, people worked remotely, were allowed to leave their home, but were advised to keep social distance. We focused on recruitment in the regions that were most affected by COVID-19, that is, Northern Brabant (the Netherlands), Lombardy (Italy), and Henan, Hubei, and Shenzhen city (China), although parents from other regions in Italy and the Netherlands were also allowed to participate.
---
Measurements
---
Parent-Child Conflict Tactics Scale
The Parent-Child Conflict Tactics Scale (CTSPC) (52) was administered in order to assess maternal harsh disciplinary style. The CTSPC measures psychological and physical maltreatment and neglect of children by parents, as well as sensitive modes of discipline. For the purpose of the current study, we focused on the psychological aggression (five items) and physical assault (four items) subscales. An example item of the psychological aggression scale is "I shouted, yelled, or screamed angrily at my child", while an example item of the physical assault scale is "I slapped my child on the hand, arm, or leg". One item of the original 5-item physical assault subscale was excluded in order to prevent feelings of discomfort in parents. Mothers rated how often they used the different types of disciplinary behavior in the past two weeks on a 6-point scale, ranging from never to ≥5 times. A harsh parenting score was calculated by summing the nine items of the psychological aggression and physical assault subscales. Confirmatory factor analyses for ordered categorical item scores indicated that a 1-factor harsh discipline model fitted the data (RMSEA (95% CI) = 0.067-0.08; CFI = 0.969; SRMR = 0.057). The estimated reliability was good (McDonald's Omega = 0.99).
---
Allomaternal Support
Participants were asked to indicate whether or not they received support in child care from residential or non-residential grandparents. In Italy and the Netherlands, very few mothers reported receiving support from residential grandparents (Italy: 3.0%, N = 19, the Netherlands: 1.1%, N = 10) whereas approximately half of the Chinese sample reported a cohabitating grandparent (China: 53.1%, N = 490). Despite governmental recommendations to keep safe distance from grandparents, some mothers reported child care by nonresidential grandparents (Italy: 15.3%, N = 98, the Netherlands: 8.3%, N = 75, China: 0.5%, N = 4). Since the number of parents receiving support from nonresidential grandparents was very low, we decided to combine support from residential and nonresidential grandparents. In addition, involvement of the father in household management/tasks and child care was assessed by asking about the degree of maternal and paternal contributions to 20 household chores or child care activities. Activities included: homeschooling, clearing the table, large purchases, loading dishwasher/washing dishes, grocery shopping, cooking, small purchases, paying bills, cleaning up house, chores in and around the house, making beds, washing and dressing up child, cleaning the house, bringing child to bed, soothing child at night, making list for grocery shopping, washing clothes, ironing, washing car, taking out trash. Mothers were asked to rate their own contribution and the contribution of their child's father to these tasks in the past week on a scale ranging from 1 (almost exclusively mother) to 5 (almost exclusively father). Cronbach's Alpha was 0.90. The average of these 20 item scores was used as a measure of father involvement, with higher scores representing greater involvement of the father.
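For reference, the reported internal consistency (Cronbach's Alpha = 0.90) for such a multi-item scale follows the standard alpha formula; the sketch below illustrates the computation on fabricated ratings, not on the study's data.

```python
import numpy as np


def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Standard Cronbach's alpha for a respondents-by-items score matrix."""
    k = item_scores.shape[1]                         # number of items (20 in this scale)
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


rng = np.random.default_rng(0)
# Fabricated example: 100 respondents rating 20 items on the 1-5 scale described above.
base = rng.integers(1, 6, size=(100, 1))
ratings = np.clip(base + rng.integers(-1, 2, size=(100, 20)), 1, 5)
print(round(cronbach_alpha(ratings), 2))
```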
---
Work Changes and Stress
Participants reported on changes in their employment that occurred due to the COVID-19 outbreak, such as loss of hours or job or decreased job security. Mothers reported on the following work changes: moved to remote working, loss of hours, decreased pay, loss of job, decreased job security, disruptions due to childcare challenges, increased hours, increased responsibilities, increased monitoring and reporting, loss of health insurance, reduced ability to afford childcare, reduced ability to afford rent/mortgage, having to fire or furlough employees, decrease in value of retirement, investments, or savings. A total score was calculated by summing reported negative changes. In addition, participants reported on the level of distress they experienced due to the employment and financial impacts of the COVID-19 outbreak on a Likert scale ranging from 1 (no distress) to 10 (severe distress). The correlation between work changes and work-related distress was r = 0.35, p < 0.001.
---
General Psychopathology
Mental health was measured with the Brief Symptom Inventory 18 (BSI-18, omitting suicidality), measuring somatization (six items), depression (five items), and anxiety (six items), and a subset of 10 questions of the posttraumatic stress disorder (PTSD) checklist for DSM-5. Because these four latent mental health constructs were highly correlated (range r 0.776-0.961), aggregate psychopathology scores were computed by averaging all 27 item scores. Confirmatory factor analysis for ordered categorical data supported this decision by indicating that one general psychopathology factor adequately explained the correlational structure of the four latent psychopathology factors (RMSEA = 0.06; CFI = 0.974; SRMR = 0.043).
In addition, health concerns specifically related to COVID-19 were measured. Parents rated the level of distress they experienced due to COVID-19-related symptoms or potential exposure that they, their family, or their friends had. A score representing general COVID-19-related health concerns was calculated by averaging the two items measuring concerns for self and for family and friends. The correlation between health concerns for self and health concerns for others was r = 0.825, p < 0.001.
---
Statistical Analysis
All analyses were conducted using the freely available software R [version 4.0.2; (53)]. Means and standard deviations were computed for continuous and normally distributed characteristics, and medians and ranges were used for non-normally distributed continuous variables. Categorical characteristics were expressed in frequencies and percentages. For continuous characteristics, the differences between the three countries were tested using one-way analyses of variance and interpreted using the Eta squared effect size. Chi-square tests were used for categorical characteristics and interpreted using Cramer's V effect size. The 9-item harsh discipline scale was used as the primary outcome measure in all cross validation analyses. The R-package xvalglms (2) allowed for conducting linear regression analyses using K-fold cross validation. Cross validation allows for estimating how a model would perform on other samples; this out-of-sample predictive performance is more accurately determined by cross validation than by traditional model fit measures such as R-squared (3). Other advantages of cross validation are that (1) it prevents overfitting the model to the idiosyncrasies of the data collected, (2) often violated regression model assumptions [e.g., linear relation between a predictor and the outcome; homoscedastic and normally distributed residuals; (2)] are no longer required, and (3) it does not rely on p-values to determine the significance of a predictor, thereby preventing the problems related to p-hacking [e.g., inflated false positive rates; (54)].
Our cross validation analyses involved two steps. In the first step, ten folds and 200 repeats were used to determine which combination of the 15 predetermined effects showed the best predictive performance in each of the three countries. This project's open science framework page includes a list of the predetermined effects, as well as the R-scripts (https://osf.io/9w8td). The inclusion or exclusion of each of those 15 effects corresponds to a total of 2^15 = 32,768 different regression models. Given that interaction effects were investigated, incorrectly specified models were excluded (i.e., those including interaction effects without the corresponding main effects), resulting in a final set of 13,311 regression models. For each country, each of those 13,311 models was fit to each of the 200 repeatedly drawn training datasets. In each repeat, the full data was split randomly into ten parts. One of those parts served as the training data, the remaining nine as the test data used to validate the model estimated on the training data. The predictive performance on these test datasets was evaluated in terms of the root mean square error of prediction (RMSE_p). For each country, the model that most often showed the lowest prediction error across the 200 repeats was considered to have the best predictive performance. In the second step of our analyses, the best fitting model of each of the three countries was validated on the data of the other two countries, in order to determine the cross-cultural validity of the factors predicting harsh discipline in each country.
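The analyses themselves were run in R with the xvalglms package; the Python sketch below only illustrates the core logic of the procedure described above: repeatedly splitting the data, scoring every candidate predictor set by its prediction error on held-out data, and counting how often each model 'wins'. The data, candidate sets, and split sizes are simplified placeholders, not the study's 13,311-model specification.

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)

# Placeholder data: n mothers, four candidate predictors, and a harsh-discipline score.
n = 300
X_all = rng.normal(size=(n, 4))          # e.g. psychopathology, marital conflict, work stress, income
y = 2 * X_all[:, 0] + X_all[:, 1] + rng.normal(size=n)
predictors = ["psychopathology", "marital_conflict", "work_stress", "income"]

# Candidate models: every non-empty subset of the predictors (the study additionally
# handled interaction terms and excluded ill-specified models, yielding 13,311 candidates).
candidates = [list(subset) for r in range(1, len(predictors) + 1)
              for subset in combinations(range(len(predictors)), r)]

n_repeats, n_folds = 20, 10
wins = np.zeros(len(candidates), dtype=int)

for repeat in range(n_repeats):
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=repeat)
    rmse = np.zeros(len(candidates))
    for train_idx, test_idx in kf.split(X_all):
        for m, cols in enumerate(candidates):
            model = LinearRegression().fit(X_all[np.ix_(train_idx, cols)], y[train_idx])
            pred = model.predict(X_all[np.ix_(test_idx, cols)])
            rmse[m] += np.sqrt(mean_squared_error(y[test_idx], pred))
    wins[int(np.argmin(rmse))] += 1  # the model with the lowest summed prediction error wins this repeat

best = candidates[int(np.argmax(wins))]
print("Winning predictor set:", [predictors[i] for i in best],
      f"({wins.max()} of {n_repeats} repeats)")
```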
For each country's winning model, the importance of the predictors was evaluated based on standardized regression coefficients resulting from a robust regression analysis to handle the violation of the homoscedastic residuals assumption in standard OLS regression.
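As an illustration of this final step, the sketch below fits a robust regression on z-scored variables so that the coefficients can be read as standardized effects. MASS::rlm (M-estimation) is used here as one common robust estimator; the paper does not name the specific R routine, and heteroscedasticity-consistent (sandwich) standard errors would be another reasonable way to address non-homoscedastic residuals. Data and variable names are the same hypothetical stand-ins as above.

```r
# Sketch: robust regression with standardized coefficients.
# MASS::rlm is one common robust estimator; the original analysis may differ.
library(MASS)
set.seed(1)

n <- 500
dat <- data.frame(
  harsh           = rnorm(n),
  conflict        = rnorm(n),
  psychopathology = rnorm(n),
  income          = rnorm(n)
)

dat_std <- as.data.frame(scale(dat))   # z-score the outcome and all predictors

fit <- rlm(harsh ~ conflict + psychopathology + income, data = dat_std)
round(coef(fit), 3)                    # read as standardized coefficients (beta)
```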
---
RESULTS
---
Descriptive Characteristics
Table 1 presents the characteristics of Chinese, Italian, and Dutch families during the COVID-19 pandemic, including age of the mother, marital status, and employment. Significant differences between countries were found for almost all characteristics, because the large sample size of the study makes these statistical tests sensitive enough to detect very small differences between countries. Effect sizes of between-country differences on socioeconomic/demographic variables (age youngest child, age mother, education, marital status, number of children, employment) were small. However, as expected, there were large differences between countries in childcare involvement of grandparents. In China, 53.6% of the mothers indicated that one or more grandparents provided support, whereas this percentage was considerably lower in both the Netherlands (9.4%) and Italy (18.3%). Figure 1 provides a visual representation of the differences between countries on the continuous characteristics listed in Table 1. See Supplementary Table 1 for additional information regarding quarantine situation and COVID-19 diagnoses among parents. Figure 2 shows for each country the distribution of the harsh discipline total scores. Harsh parenting differed significantly between the three countries: Dutch mothers used less harsh parenting than Chinese and Italian mothers. Supplementary Tables 3, 4 and 5 present the correlations between the two subscales of the CTSPC (psychological aggression and physical assault), childcare
---
Cross Validation
Table 2 shows for each country the top three regression models in terms of minimizing the prediction error (RMSE) in the cross validation analyses. The number of wins indicates the percentage of the 200 cross validation repeats in which a particular model showed the lowest prediction error (RMSE) of all 13,311 investigated models. The cross validation procedure identified a unique winning model for each of the three countries. In Italy, number of children, education, house with garden, general psychopathology, and marital conflict were important predictors.
In the Netherlands, the following predictors were found: number of children, work change, general psychopathology, and marital conflict. In China, income, education, work stress, general psychopathology, marital conflict, father involvement, and the interaction between grandparental involvement and age of the youngest child were important predictors (see Supplementary Table 2). Table 2 presents the standardized regression coefficients (β) and Wald test p-values from three robust regression analyses, including for each country the predictors of the winning model identified through cross validation. In all countries, marital conflict and psychopathology showed a substantial positive association with harsh parenting, although there were considerable between-country differences in the identified predictors. In line with our expectations, harsh parenting was partly explained by the interaction between childcare offered by grandparents and age of the youngest child. Figure 3 illustrates this interaction effect, showing that grandparental childcare was associated with less harsh parenting by Chinese mothers, especially when the youngest children were still young.
To determine the cross-cultural predictive validity of each country's winning model, a second series of cross validation analyses was conducted, evaluating the predictive performance of each winning model when predicting harsh parenting in the other two countries. Figure 4 visualizes the resulting prediction error distributions for each of the fitted top models and each of the three datasets. Unsurprisingly, for each dataset, the country's own best model showed the lowest prediction error in 100% of the cross validation repeats. The distributions in the bottom row of Figure 4 show that the Dutch and Italian models perform poorly in predicting harsh parenting in China. Interestingly, the overlapping distributions of the Dutch and Italian models in the Italian data suggest that the Dutch predictors can predict harsh care of Italian mothers reasonably well.
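A sketch of this second validation step is given below: each country's winning formula is fit to its own data and then used to predict the outcome in the other countries' data, with prediction error summarized as RMSE. For brevity, each model is fit once to the full data of its own country rather than within repeated cross-validation folds, and all datasets, formulas, and variable names are hypothetical stand-ins.

```r
# Sketch: validating each country's winning model on the other countries' data.
set.seed(2)
make_dat <- function(n) data.frame(
  harsh = rnorm(n), conflict = rnorm(n), psychopathology = rnorm(n),
  income = rnorm(n), n_children = rpois(n, 2)
)
datasets <- list(China = make_dat(300), Italy = make_dat(300), Netherlands = make_dat(300))

winning <- list(                                   # hypothetical winning formulas
  China       = harsh ~ conflict + psychopathology + income,
  Italy       = harsh ~ conflict + psychopathology + n_children,
  Netherlands = harsh ~ conflict + psychopathology + n_children
)

rmse_matrix <- sapply(names(winning), function(model_country) {
  fit <- lm(winning[[model_country]], data = datasets[[model_country]])
  sapply(names(datasets), function(target_country) {
    pred <- predict(fit, newdata = datasets[[target_country]])
    sqrt(mean((datasets[[target_country]]$harsh - pred)^2))
  })
})
rmse_matrix   # rows: target dataset; columns: country whose winning model was used
```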
---
DISCUSSION
In the current study we examined risk and protective factors predicting maternal harsh parenting during the COVID-19 lockdown in China, Italy, and the Netherlands. We applied a cross-validation approach (2) for selecting which combination of 15 predetermined effects showed the best predictive performance in each country. Predictive modeling pointed to marital conflict and maternal psychopathology as shared risk factors predicting harsh parenting in each of the three countries. Despite these common factors, cross-validation identified a unique winning model for each of the three countries, thus indicating that the winning models with the best predictive performance differed between countries. In the Netherlands, work changes and number of children in the home predicted harsh parenting in addition to psychopathology and marital conflict, whereas in Italy, number of children, education, and house with garden were considered important predictors of maternal harsh parenting. In contrast, harsh parenting used by Chinese mothers was best predicted by education, income, and work-related stress of the mother. In addition, father involvement and grandparental involvement for mothers with a young child were considered important protective factors lowering risk for harsh parenting in China. Our findings extend our previous study in which we examined maternal mental health during the lockdown in China, Italy, and the Netherlands, but did not assess harsh parenting (31). Results indicate that, in addition to marital conflict and maternal psychopathology as shared risk factors, models predicting harsh parenting during COVID-19 include distinct risk factors that are not replicated across cultures, possibly due to cultural variations in family composition and allomaternal support. Hence, although harsh parenting is a global phenomenon (51), the constellation of factors predicting maternal harshness during COVID-19 is not identical.
First results of COVID-19 studies indicate that the pandemic drastically impacted family life and that COVID-19-related distress can increase harsh parenting practices [e.g., (47)]. Our cross-validation results extend the results of initial studies by indicating that there were considerable between-country differences in the identified predictors of maternal harshness. In our cross-validation approach, model predictors were not selected based on statistical significance, but based on their performance in predicting harsh parenting in each country. This predictive modeling context contrasts with the traditional explanatory data analysis approach used by previous COVID-19 studies and enables the identification of a risk factor model that most accurately predicts harsh care during the lockdown in each of the three countries. Our finding that each country has a unique constellation of factors predicting harsh parenting indicates that we should be careful with generalizing findings on disrupted parenting during the lockdown to other countries. The predictive performance of models predicting harsh care during COVID-19 is not the same across countries, implying that there is no universal risk factor model that can be used for the identification of at-risk families across countries.
In line with our expectations, we found that grandparental involvement lowered the risk for harsh parenting among Chinese mothers. Interestingly, grandparent involvement interacted with age of the child. The grandparent effect was particularly pronounced for Chinese mothers with younger children, which is in line with previous studies showing that grandparental involvement is particularly advantageous for children in the post-weaning phase. For example, (50) showed a positive grandmother effect on the nutritional status of Aka children in Congo, with the effect most evident during the critical 9-36 month post-weaning phase. This post-weaning phase may be a critical period demanding high levels of allomaternal support because maternal caregiving decreases while toddlers are still heavily dependent on care. Moreover, toddlerhood is the period characterized by increases in parent-child conflict related to the child's burgeoning autonomy and parental disciplinary strategies (55), thereby increasing the caregiving load for parents. According to the grandmother hypothesis (22), the prolonged post-reproductive lifespan of grandmothers is the result of evolution favoring post-reproductive individuals who enhance their fitness by assisting their own offspring to reproduce successfully (49). Our results add to these findings and suggest that, under the adverse COVID-19 conditions, grandparents indirectly promote children's wellbeing by exerting protective effects on the rearing environment.
Grandparental involvement was, however, an important predictor only in the top winning model predicting maternal harshness in China, not in the Netherlands and Italy. This is consistent with our previous study with the same sample, in which we found that grandparental support lowered mental health problems only among Chinese mothers (31). Hence, no grandparent effect was observed in Italy and the Netherlands, possibly because in these countries the nuclear family is the most common family constellation, and nonresidential grandparents were kept at a distance from parents and grandchildren during the lockdown. Another remarkable difference between the Dutch and Italian vs. the Chinese models, potentially related to cultural variations in family structure, was that the number of children contributed to harsh care in the Netherlands and Italy, whereas this factor was considered unimportant in the Chinese model. Although previous research has identified a large number of children in the home as a risk factor for child maltreatment (48), these studies were predominantly conducted in Western societies with nuclear families. In extended families, grandparents or other kin may assist with child care in the home environment, thus sharing the caregiving load and allowing parents to have more children without increasing the risk for child maltreatment (49). In China, where the extended family is considered traditional, a large number of children may therefore be a less important predictor for maltreatment. These results suggest that the antecedents of harsh parenting during the lockdown may be different across countries due to cultural variations in family composition. This interpretation is supported by our observation that Dutch risk factors predicted harsh care of Italian mothers reasonably well, possibly because in both countries the nuclear family is most prevalent, whereas Dutch and Italian models performed poorly in predicting harsh parenting in China. It should be noted that many countries are multicultural and include multiple ethnic groups. Hence, our findings not only indicate that there is no universal risk factor model that can be used for the identification of at-risk families, but also warrant caution against accepting one model for COVID-19-related risk factors within one country. Cultural variations in family composition may accentuate or minimize the importance of risk and protective factors, possibly leading to between- and within-country differences in the constellation of risk factor models.
In addition to the potential role of family composition, employment rates of mothers may also have resulted in a differential constellation of predictors across the three countries. The employment rate of the Chinese mothers in the current sample was very high (93.6%), which matches well with the above-world-average rate of female labor force participation in China (56). Moreover, the vast majority of women are in full-time employment, as part-time work has not yet been widely introduced or encouraged in China (57). As a consequence, the need for allomaternal support may be high in China: Chinese mothers may need support with childcare from either grandparents or fathers in order to meet the demands from work (58). This may explain why Chinese mothers who benefitted from support from highly involved fathers showed lower levels of harsh parenting, whereas father involvement was not considered an important predictor in Italy and the Netherlands. In line with this explanation, we found that father involvement was higher in China compared to Italy and the Netherlands. Another unexpected finding was that work-related stress or work-related changes predicted harsh parenting in the Netherlands and China, but not in Italy. In Italy, the male breadwinner model is most prevalent and female employment rates are rather low (59). Although work-related changes and stress reported by Italian mothers were quite high and the majority of mothers were employed, their partners' financial and job security may have lowered maternal stress regarding financial resources and buffered the effect of mothers' work stress on parenting abilities.
During COVID-19, older adults in particular were advised to maintain social distance, and (non-residential) grandparents who were involved in child care prior to the pandemic suddenly refrained from babysitting. Although this may have been a necessary precaution in order to avoid exposure to the virus, loss of allomaternal support from grandparents may have had a negative impact on parents (31) as well as children. The unexpected loss of grandparental support during the lockdown may have increased parenting stress, which may in turn lead to an overreliance on less effective disciplinary strategies, such as harsh discipline. Although grandparental involvement in child care exerts positive influences on children's health and well-being (9), the role of grandparents in caregiving is still sidelined in policy decisions. Research on caregiving has also focused mainly on the mother as the primary caregiver and neglected the role of other caregivers such as grandparents. Our finding that high levels of allomaternal support from grandparents and fathers reduces the risk for harsh maternal caregiving during the lockdown in China underscores the importance of shared care, and may inform policies regarding child care during future pandemics. Adopting approaches to build a pandemic-proof community of care and strengthening networks of support inside and outside the family unit may help at-risk parents during future pandemics.

Some strengths and limitations should be noted. One strength of the study is that we examined the cross-cultural validity of factors predicting harsh care using large samples from three different countries. Examining parenting during the pandemic across countries is important because COVID-19 is a global crisis and understanding factors predicting harsh care will help identify at-risk families during future pandemics. Yet, it is unclear whether results from individual countries are replicable across countries. Another strength is the use of cross-validation, which enabled us to identify those predictors that best predict maternal harshness in our data, but also perform well in predicting harsh parenting in various random subsets of the data. Cross-validation therefore revealed models that can be used to predict harsh parenting during future pandemics. This contrasts with standard statistical analyses that risk overfitting their regression models, resulting in models that fit the initial data very well, but are difficult to replicate in future research.
Another strength is that allomaternal support from fathers was measured with a 20-item task division questionnaire, enabling us to study how the degree of paternal involvement impacts maternal caregiving. However, it should be noted that grandparental involvement was measured dichotomously and we were not able to differentiate between maternal and paternal grandparents. Effects of grandparental involvement may be even more pronounced with continuous measures, which offer more statistical power. A second limitation is that some variables did not have sufficient within-country variability to test whether they contributed to harsh care. For example, in the Netherlands almost all parents reported living in a house with a private garden. In contrast with our expectation that lower quality housing would predict harsh care, living in a house with a garden was related to higher levels of harsh parenting in Italy. This effect, however, only approached significance in the robust regression analysis, was absent in China, and may therefore be the result of confounding factors that we did not control for in the current study. In addition, it should be noted that the Chinese, Italian, and Dutch samples showed differences in sociodemographic variables, such as age and employment. However, due to the large sample size, statistical tests were sensitive enough to detect very small differences between countries. It is not very likely that this has influenced the results, as effect sizes were small and we controlled for sociodemographic variables in all analyses. The analyses also mainly focused on predictive models in which multivariate associations are more important than mean level differences between the countries. Furthermore, Italy was affected to a larger extent by COVID-19 than the Netherlands and China. During data collection, China was in the aftermath of COVID-19, whereas the number of infections was still high in Italy and the Netherlands. Pandemic restrictions concerning closures of schools and day care centers, social distancing, and remote working were, however, the same across countries. Moreover, our results show that COVID-19-related health concerns did not contribute to the prediction of harsh parenting. It is therefore unlikely that the constellation of factors predicting harsh care differed across countries due to differences in COVID-19 severity. Furthermore, it should be noted that the threshold parameters in the harsh parenting factor model for ordinal items were not invariant across countries, implying that factors other than harsh parenting were influencing the differences between countries on some harsh parenting item scores. The deviation from invariance, however, seemed small, and invariance did hold for the factor loadings. This analysis suggests that mean differences between countries on the harsh parenting scale should be interpreted with care. Lastly, we examined only maternal harshness and excluded fathers from the current analyses, although we did examine paternal involvement in child care. Future COVID-19 studies should involve fathers. Moreover, future research should also examine the impact of lockdowns in families at risk for maltreatment. Allomaternal support may be particularly important in at-risk families. For example, a high-quality relationship with involved grandparents may play a buffering role for children in at-risk families.
In conclusion, during COVID-19 parents were presented with unprecedented challenges. For some families, pandemic-related distress may interfere with adequate parenting. Examining risk and protective factors for impaired parenting is therefore important and will help identify at-risk families during COVID-19 and future pandemics. Our study showed that the constellation of factors predicting maternal harsh parenting during the COVID-19 lockdown is not identical across countries. Although marital conflict and maternal psychopathology are shared risk factors, the predictive performance of models predicting harsh parenting during COVID-19 differed across countries. Hence, the constellation of factors predicting maternal harshness during COVID-19 is not universal. This information will be valuable for the identification of at-risk families during future pandemics. Importantly, our results indicate that shared childrearing can buffer against risks for harsh parenting during adverse circumstances such as COVID-19, thus motivating the development of pandemic-proof support approaches, customized for individual countries, to assist parents with childcare and reduce parenting stress during future pandemics. During the lockdown, in the absence of any childcare support from the community, the concept "It takes a village to raise a child" (8) may have had more meaning than ever. Mothers do not rear children on their own, and allomaternal support from fathers, grandparents, and the community may be needed to establish resilience at a family level. Hence, building a pandemic-proof community of care can be leveraged in efforts to prevent harsh caregiving practices and their detrimental effects on children's well-being during future pandemics.
---
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
---
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by School of Social and Behavioral Sciences of Tilburg University, Department of Psychology of Padua University, Peking University Medical Ethics Board. The patients/participants provided their written informed consent to participate in this study.
---
AUTHOR CONTRIBUTIONS
MR: conceptualization, investigation, validation, data curation, writing-original draft, funding acquisition, supervision, project administration, and resources. PL: software, methodology, validation, data curation, formal analysis, visualization, and writing-original draft. MV-V: investigation, writing-review, and editing. MB-K and MvIJ: methodology, supervision, writing-review, and editing. PDC and JG: investigation, data curation, writing-review editing, resources, and funding acquisition. All authors contributed to the article and approved the submitted version.
---
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyt.2021.722453/full#supplementary-material
---
Conflict of Interest:
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. | 51,084 | 1,726 |
91ea3553a6d92b7ee87e9a37b2c8021bdd335e58 | Neighborhood Reputation and Resident Sentiment in the Wake of the Las Vegas Foreclosure Crisis | 2,014 | [
"JournalArticle",
"Review"
] | This study examines how two major components of a neighborhood's reputation-perceived disorder and collective efficacy-shape individuals' sentiments toward their neighborhoods during the foreclosure crisis triggered by the Great Recession. Of central interest are whether neighborhood reputations are durable in the face of a crisis (neighborhood resiliency hypothesis) or whether neighborhood reputations wane during times of duress (foreclosure crisis hypothesis). Geo-coded individual-level data from the Las Vegas Metropolitan Area Social Survey merged with data on census tract foreclosure rates are used to address this question. The results provide qualified support for both perspectives. In support of the neighborhood resiliency hypothesis, collective efficacy is positively associated with how residents feel about the quality of their neighborhoods, and this relationship is unaltered by foreclosure rates. In support of the foreclosure crisis hypothesis, foreclosure rates mediate the effects of neighborhood disorder on resident sentiment. The implications of these findings for community resiliency are discussed. | Introduction
Neighborhood reputations are based on common perceptions of neighborhood disorder and common perceptions about a neighborhood's ability to cope with disorder (Sampson 2012). Once recognized, neighborhood reputations shape individual sentiments about neighborhood quality; these sentiments then guide residential mobility decisions (e.g., Lee, Oropesa, and Kanan 1994; Speare 1974), reinforce stigmas in urban communities and perpetuate urban spatial inequalities, influence growth machine politics (Baldassare and Protash 1982; Temkin and Rohe 1996), and potentially affect the resiliency of a community in the wake of catastrophe (e.g., Hartigan 2009). Numerous studies examine the determinants that influence individual sentiments regarding neighborhood quality and residential satisfaction (e.g., Amerigo and Aragones 1997; Dassopoulos, Batson, Futrell, and Brents 2012; Galster and Hesser 1981; Grogan-Kaylor et al. 2006; Hipp 2009; Lovejoy, Handy, and Mokhtarian 2010; Parkes, Kearns, and Atkinson 2002), but research on the dynamic processes that reinforce or alter residential sentiments during a crisis period is largely absent from the literature.
This article contributes to an emerging area of urban community and disaster research by advancing a thesis that helps explain how neighborhood reputations function during crisis periods, when residents are forced to reassess the correspondence between objective circumstances and their residential sentiments. During times of neighborhood crisis-caused for example by natural disasters or sharp economic downturns-neighborhood reputations are more likely to be relied upon to guide the thoughts and actions of residents, and in the process, individual sentiments about their neighborhoods are apt to be altered in more favorable or less favorable ways as residents actively evaluate whether the purported reputation is living up to expectations. To begin to examine this premise empirically, our study uses survey-based data during the most recent housing foreclosure crisis to analyze how objective neighborhood circumstances, together with measures of neighborhood reputation, influence individual assessments of the quality of their neighborhood.
The Great Recession, which officially began in December 2007 (Muro et al. 2009), triggered a housing foreclosure crisis throughout the US. For this study, we focus our examination of the relationship between housing foreclosure rates, neighborhood reputations, and resident sentiment on a strategic location, Las Vegas, Nevada. Following nearly 20 years of the nation's most rapid population growth and urban sprawl (CensusScope 2000), Las Vegas was one of the most heavily impacted metropolitan areas, with some of the highest unemployment rates and home foreclosures in the nation (Bureau of Labor Statistics 2011; Center for Business and Economic Research 2011). Yet, Las Vegas is an advantageous area to study the effects of neighborhood reputation on the making of resident sentiment, not only because it is an especially hard-hit area, but because many newly built master-planned communities throughout Las Vegas have untested and potentially precarious neighborhood reputations (e.g., Knox 2008). In the wake of a deep economic recession, as residents face difficult decisions about their homes and neighborhoods, understanding how neighborhood reputations shape residential satisfaction will open new ways of thinking about what manifests neighborhood resiliency and neighborhood change.
---
Boom and Bust: Las Vegas and the Foreclosure Crisis
The Las Vegas metropolitan area led the nation in population growth during the 1990s at 66.3%, almost doubling the rate of population growth of second-ranked Arizona (CensusScope, 2000). Population growth in the Las Vegas metropolitan region continued apace in the 2000s with roughly half a million people arriving between 2000 and 2007. In this context of population growth, transiency was also high. In 2000, Nevada ranked highest among all states in residential mobility, with 25% of the population having moved from another state to Nevada within the past five years. Between 2000 and 2004, Nevada had the highest domestic annual rate of net migration in the country (Perry 2006). As a result of such rapid population growth and the attendant economic boom, the Las Vegas housing market flourished between 1990 and 2006. With approximately 6,000 newcomers per month arriving in Las Vegas at the height of the boom, home prices reached all-time highs in 2006, as many residents moved into newly developed master-planned communities equipped with additional amenities and homeowners associations. The average median price of a single-family home was $349,500 in January 2007. Just four years later, following the economic bust and housing crisis, the median price of single-family homes in January 2011 was $132,000 - an astonishing 62% decline (Greater Las Vegas Realtors Association, 2007, 2011). This is the largest decline of any metropolitan area in the United States (Community Resources Management Division 2010).
Ultimately, problems with subprime lending began to emerge in urban areas that had large racial and ethnic concentrations, mid- and low-level credit scores, new housing construction, and high unemployment rates (Rugh and Massey 2010; Mayer and Pence 2008). For Las Vegas, it was a booming housing market, relaxed lending standards, low short-term interest rates, and irrational exuberance about housing prices that contributed to rapid rates of home value appreciation and concentrations of subprime lending (Muro 2011; Mayer and Pence 2008). Recent scholarship has also identified the role of metropolitan residential segregation and racial and ethnic targeting of subprime lending as a primary contributor to the housing crisis (Hyra, Squires, Renner, and Kirk 2013). However, even with a large and growing Hispanic population, Las Vegas ranks relatively low on both black/white and Hispanic/white segregation levels (Frey 2010) - suggesting that the bustling housing market was the most likely driver of subprime lending and the housing collapse in Las Vegas.
With the largest concentration of subprime mortgage originations in the country (Mayer and Pence 2008), the Las Vegas housing market was a ticking time bomb for a housing bust. Subprime mortgage products were designed to provide home ownership opportunities to the most credit-vulnerable buyers, including those with no established credit history, little documentation of income, and/or those with smaller down payments. In addition to subprime lending, mortgage companies also made it easier for current homeowners to refinance loans and withdraw cash from houses that had appreciated in value (Mayer and Pence 2008). As a result, since 2007, approximately 70,000 housing units have been foreclosed upon with nearly 6,000 new foreclosures occurring every quarter (Community Resources Management Division 2010). Up until 2006, Nevada had a very low loan delinquency rate, particularly among subprime borrowers. This was partly because borrowers in the robust Nevada housing market could often avoid foreclosure by quickly selling their homes to eager buyers (Immergluck 2010). However, between 2007 and 2010 the foreclosure rate in Nevada increased by about 3 percentage points a year (Community Resources Management Division 2010). Such rapid and chaotic economic stress raises questions about the changing quality of neighborhood life for Las Vegas residents in this recessionary climate.
---
Neighborhood Reputations: Disorder and Collective Efficacy
Neighborhoods are often the environment wherein residents develop identities, forge relationships with peers, and create meaning and coherence in their lives. A neighborhood's reputation-shared beliefs among residents about the positive or negative qualities of a residential area-can influence people's views about themselves and the broader community. Neighborhoods with positive reputations are vital to the sustainability of healthy cities. When residents feel a sense of pride and satisfaction with their neighborhoods, they report a greater sense of attachment to the local community, higher overall life satisfaction, better mental and physical health, greater political participation, and are more likely to invest time and money in maintaining that positive image of the community (Adams, 1992; Hays & Kogl, 2007; Sampson, Morenoff & Gannon-Rowley, 2002; Sirgy & Cornwell, 2002). Consequently, when residents are dissatisfied with their neighborhoods, they report a lower quality of life, are less invested in the community, and are more likely to engage in outmigration, which hinders long-term stability and reduces the capacity of a neighborhood to be resilient when challenges arise (Bolan, 1997; Oh, 2003; Sampson, 2003).
Residents' shared perceptions about various neighborhood qualities-e.g., convenient location and access to good schools-affect a neighborhood's reputation, but there are two essential neighborhood characteristics in particular that form the foundation of any neighborhood reputation. The first is whether residents jointly feel physical disorder is problematic for the neighborhood (e.g., abandoned property, broken windows, crime, etc.), and the second is residents' shared expectations about the collective ability of the neighborhood to address problematic issues (Sampson 2012). Through the lens of social disorganization theory, researchers have long studied the effects of neighborhood structural characteristics and physical signs of disorder on crime rates (Hipp 2010; Kubrin and Weitzer 2003; Sampson and Groves 1989), but an important distinction is warranted between objective observations of physical disorder (i.e., whether or not there is graffiti on the buildings and trash and litter on the streets) and people's stated sentiments about whether those conditions are problematic. The latter, people's shared evaluation of the problem, constitutes an important aspect of a neighborhood's reputation.
According to Robert Sampson's recent work on the stability and change of Chicago neighborhoods, "perceptions of disorder" are what "molds reputations, reinforces stigma, and influences the future trajectory of an area" (2012:123; also see Hunter 1974:93). Perceived neighborhood disorder, independent from actual objective measures of disorder, greatly affects the character of a neighborhood over time. Sampson (2012:144-145) finds that, in predicting future neighborhood conditions (e.g., poverty levels, crime rates, and outmigration), perceived neighborhood disorder is at least as strong a predictor as prior (i.e., lagged) neighborhood conditions. In the case of crime, prior perceptions of disorder are actually a much stronger predictor of future neighborhood crime rates than prior levels of crime. Adams (1992) also finds that residents' perceptions of crime and disorder have greater influences on neighborhood satisfaction than the actual existence of such crime and disorder.
The second aspect of a neighborhood's reputation is collective efficacy. Collective efficacy is "the linkage of cohesion and mutual trust among residents with shared expectations for intervening in support of neighborhood social control" (Sampson 2012: 127). Neighborhood cohesion among residents is believed to be a local resource for organizing around problems when they occur (Morenoff, Sampson, and Raudenbush 2001; Kubrin and Weitzer 2003; Larsen et al. 2004). Prior work has shown that, like perceived neighborhood disorder, perceived social trust and neighboring are meaningful to residents in their assessments of neighborhood quality (Grogan-Kaylor et al. 2006; Parkes et al. 2002). Neighboring fosters mutual support and trust among neighborhood residents (Sampson et al. 1989), and forming social ties helps foster attachments to an area (Austin and Baba 1990; Hipp and Perrin 2006; Kasarda and Janowitz 1974; Parkes et al. 2002; Sampson 1988, 1991). Neighborliness reflects attachment through various activities that range from helping a neighbor in need to organizing to address a shared neighborhood problem (Woldoff 2002). As residents participate in neighborhood activities, they develop a shared sense of community and develop positive communal feelings (Ahlbrandt, 1984; Guest & Lee, 1983; Hunter and Suttles, 1972; Kasarda and Janowitz, 1974; Riger and Lavrakas, 1981).
Metropolitan context has implications for neighborhood reputations. Much of the research on neighborhood disorder and collective efficacy has taken place in Chicago, a city with many longstanding and historic neighborhoods. But Las Vegas is a different kind of metropolitan area, with many newly built "master-planned communities" (MPCs). These MPCs typically have homeowners associations (HOAs) and additional amenities that are not commonly associated with neighborhoods in cities like Chicago. These newer MPCs are also less likely to have firmly entrenched reputations, and this will likely increase the variability in how residents respond to a crisis. Although new, MPCs in Las Vegas are certainly not without reputations. Many MPCs are actually provided simulacra-based reputations of community life through marketing strategies before any homes are even sold. This is because neighborhood qualities that are associated with communal bonds and collective efficacy have not been lost on the developers of contemporary master-planned communities. Today, developers of MPCs seek to enhance the marketability of their properties by providing amenities and design features that are intended to provide buyers with "a sense of community." Knox (2008:99) keenly recognizes this as a product-branding process in which developers synthetically attempt to instill upon a neighborhood a positive community-oriented reputation in order to sell buyers not only on the quality of the homes, but on the quality of the entire neighborhood (also see Freie 1998). HOAs are also popular with these MPCs because the fees they solicit, and the rules they enforce, are meant to ensure a degree of consistency in the quality of the neighborhood brand.
The high rate of urban development prior to the Great Recession, the magnitude of the foreclosure crisis in the Las Vegas area, and the unique characteristics of MPCs make Las Vegas an advantageous place to study the making of residential sentiments for several reasons. First, the making of residents' sentiment in an unsettled period is important because these sentiments will likely facilitate neighborhood resiliency or neighborhood change during the recovery period. Second, given the highly volatile conditions in Las Vegas, our ability to discern the effects of objective neighborhood circumstances (like foreclosure rates) on subjective residential sentiments is enhanced. In other words, the objective reality of the crisis is likely to be physically more salient in Las Vegas than elsewhere, making the effects more visible. Third, people's preconceived ideas about their neighborhoods are more likely to be challenged and subjected to dissonance because of the relative newness of many Las Vegas neighborhoods and their relatively unproven statuses. As alluded to above, the stability of a neighborhood's reputation typically exerts an inertia-type effect on individual sentiments during settled periods, but when crises strike, newer and older neighborhoods alike have their reputations tested. We elaborate on this dynamic below. Fourth, homeowners associations common among MPCs are likely to act as intermediate institutions when crises strike. That is, HOAs may take steps to protect property values in ways that bolster resident sentiments toward their neighborhoods, or conversely, the powerlessness of HOAs to deflect the foreclosure crisis could create an even greater disjuncture in expectations that further erodes resident sentiment. The uniqueness of Las Vegas makes it possible to more clearly observe these key dynamics in action.
---
Neighborhood Reputations during a Crisis
High foreclosure rates and the accumulation of real estate owned properties (REOs) have detrimental effects on neighborhoods (Apgar and Duda 2005;Immergluck and Smith 2006;Schuetz, Been, and Ellen 2008). In many neighborhoods, foreclosed homes are boarded up and vacant with unkempt yards and real-estate signage to indicate the neighborhood's diminished status. As a result, these properties create opportunities for criminal activity, discourage remaining residents from investing in their properties, potentially damage neighborhood social capital, and ultimately lower a neighborhood's perceived quality (Leonard and Murdoch 2009). These spillover effects result in neighborhood property devaluation as foreclosed homes typically sell at much lower prices and appreciate much more slowly than traditionally sold homes (Forgey, Rutherford, and VanBuskirk 1994;Pennington-Cross 2006). Based on data collected on foreclosures and single-family property transactions during the late-1990s, Immergluck and Smith (2005) estimated that each foreclosure within a city block of a single-family home resulted in a 0.9%-1.4% decline in that property's housing value. Ordinarily foreclosures may pose a serious threat to neighborhood stability and community well-being, and during the Great Recession unprecedented levels of housing foreclosures have become an objective symbol of genuine neighborhood crisis.
Despite the potential effects of housing foreclosures on assessments of neighborhood quality and the remaking of a residential area's reputation, little is known about how a metropolitan-wide foreclosure crisis affects individuals' perceptions of their neighborhoods. As with high levels of perceived neighborhood disorder and low levels of perceived collective efficacy, we can reasonably expect that high levels of foreclosures will be negatively associated with individuals' assessments of their neighborhoods. Yet, new realities and new ways of life emerge during unsettled periods, and these changes can challenge prior views and perceptions (e.g., Swidler 1986; Elder 1974).
To more fully understand the potential for change during these unsettled times, it is important to focus on how objective neighborhood circumstances, like foreclosure rates, may alter the relationship between a neighborhood's reputation and individual sentiments. Neighborhood reputations are generally stable during non-crisis periods, and are highly predictive of future neighborhood change, even more highly predictive than objective measures of neighborhood conditions (as reported above). But, importantly, during a crisis period when objective neighborhood circumstances cannot be easily ignored, the salience of a neighborhood reputation might weaken and come to matter less in shaping people's perceptions. This could be especially true in Las Vegas where the reputations of many new MPCs are untested. From this perspective emerges the foreclosure crisis hypothesis: Housing foreclosures will significantly mediate the relationship between neighborhood reputation (measured via collective efficacy and neighborhood disorder) and (a) individual assessments of neighborhood quality and (b) individual satisfaction with neighborhood property values. Thus, the effects of the crisis will have more influence on the sentiments of residents than perceived neighborhood reputations.
Objective circumstances may carry greater significance during a crisis because residents are forced to evaluate the correspondence between the objective situation and what they thought they knew about their homes, investments, and neighbors. However, disaster research reminds us time and again that individuals, families, neighborhoods, and communities are quite resilient when crises strike. It is common, for example, for areas affected by natural disasters to rebound within a few years to achieve a full functional recovery in terms of returning to, or in some cases exceeding, pre-disaster levels of population, housing, and economic vitality (Cochrane 1975; Friesema et al. 1979; Haas et al. 1977; Pais and Elliott 2008; Wright et al. 1979). A surprisingly unexplored factor that is potentially a major facilitator of resiliency is a neighborhood's reputation, especially collective efficacy, as people are much more likely to need to rely on others during a crisis. Positive neighborhood reputations might ward against high foreclosure rates in the first place, or as a crisis unfolds, residents may filter the situation through their commonly shared beliefs about their community. Relying on preconceived beliefs for guidance during a crisis may produce the kinds of behaviors and outcomes consistent with the neighborhood's reputation. From this perspective, families and neighborhoods are more or less resilient because individuals respond to crises in ways that create a correspondence between reputation and reality. In support of this perspective emerges the neighborhood resiliency hypothesis: Neighborhood reputations (i.e., collective efficacy and neighborhood disorder) will significantly mediate the relationship between neighborhood foreclosure rates and (a) individual assessments of neighborhood quality and (b) individual satisfaction with neighborhood home values. Thus, neighborhood reputations will have more influence on the sentiments of residents than housing foreclosures.
The evaluation of the foreclosure crisis hypothesis and neighborhood resiliency hypothesis is an important first step toward a more comprehensive understanding of the reciprocal connection between disasters and neighborhood reputations: Disasters have the power to fundamentally alter neighborhood reputations through the collective changes of individual sentiments, and yet, existing neighborhood reputations are potentially able to mitigate the effects of disasters on individuals and families. Ultimately, individual sentiments regarding their neighborhoods are the intervening link between disaster and changes to neighborhood status. Although we are unable to fully capture the entire reciprocal cycle - from existing neighborhood reputation through the crisis period to the altered neighborhood reputation - we do focus keenly on the linchpin in the process: individual sentiments regarding their neighborhoods.
---
Data and Methods
---
Study Area
The data for this study come from the Las Vegas Metropolitan Area Social Survey (LVMASS). LVMASS provides individual-level data gathered from respondents living in 22 neighborhoods in the Las Vegas metropolitan area of Clark County, Nevada in 2009. Clark County has a population of roughly 1.95 million people and is home to 72% of the population of Nevada (U.S. Census Bureau 2010). Our sample includes neighborhoods in each of the four distinct municipal jurisdictions composing the Las Vegas metropolitan area: eight in the City of Las Vegas, four in North Las Vegas, four in Henderson, and six in unincorporated Clark County. Our data on housing foreclosures came from the Housing and Urban Development (HUD) Neighborhood Stabilization Program (NSP) authorized under Title III of the Housing and Economic Recovery Act of 2008. The data provide the approximate number of foreclosure starts for all of 2007 and the first six months of 2008. We use these data to calculate the proximate foreclosure rates at the census tract level, matching the NSP data to the LVMASS survey data by census tract identifiers to create a multilevel data set of individual respondents clustered within Las Vegas neighborhoods.
---
Sampling Frame
For the LVMASS, we used a stratified cluster sampling design to ensure that our sample included neighborhoods with socioeconomic diversity. Using a stratified (by income quartiles) cluster sample, our study resulted in 22 distinct neighborhoods. Our primary goal was to capture neighborhood-level data from "naturally-occurring" neighborhoods that were geographically identified in the same way that most residents identify with their neighborhood. We diverge from studies that rely strictly on census-based boundary definitions and instead collected information from independent neighborhoods that lie within census tracts. In the fall of 2008, through extensive field work, we identified neighborhoods by key physical characteristics within selected census tracts, including contiguous residences, interconnected sidewalks, common street signage, common spaces, common mailboxes, street accessibility, visual homogeneity of housing communities, and barriers separating housing areas such as gates, waterways, major thoroughfares and intersections.[1] For inclusion as a study neighborhood, we specified that there must be at least 50 visibly occupied homes to avoid non-response and invalid addresses. Our final sampling frame of household addresses was compiled from the Clark County, Nevada Assessor's Office, which maintains electronic records of all residential addresses. We then randomly selected a range of 40 to 125 addresses from the sampling frame in each neighborhood. The final study population included 1,680 households in 22 neighborhoods and resulted in 664 individual respondents and a 40% response rate. The household member with the most recent birthday and over the age of 18 was asked to complete the survey. After excluding cases with values missing on our key dependent variables, our final analytic sample for this study was 643 Las Vegas households. Among those that responded to the survey, there were no statistical differences along any of our observed independent variables between those with missingness on our dependent variables and those without missingness.

[Footnote 1] At the time of sampling, there were a total of 345 Census tracts in the Las Vegas metropolitan area. Using data from both the 2000 Census and the 2005-2009 American Community Survey 5-year estimates, we compared our study neighborhoods within the 22 Census tracts to the remaining 323 Census tracts along several socio-demographic characteristics, including median household income, percent poverty, racial composition, percent married, percent 65+, educational attainment, median year house was built, and percent owner occupied housing units. We found no significant differences between our study tracts and those not included in the study, leading us to conclude that the neighborhoods we included in this study are representative of the Las Vegas Metropolitan Area in general.
---
Survey Instrument
For this study, each household received a letter offering an incentive to participate (a family day pass to a local nature, science, and botanical gardens attraction), along with a website address for a web-based survey and a telephone number for completing the survey by phone. After exhausting the telephone and web-based responses, we used mailed surveys and door-to-door field surveys. The survey was made available in English and Spanish and administered by trained survey administrators.
---
Sample Characteristics
Table 2 shows descriptive statistics of the total sample. Residents in our sample have a mean age of 54 years and an average length of residence in their neighborhood of 11.7 years. Our sample is 73% non-Hispanic white and 27% non-white. Most of our respondents were employed (93%) and homeowners (80%). Nearly 33% of our sample held at least a college degree, followed by 41% with some college education, and 26% with a high school degree or less. Our analytic sample characteristics differ slightly from 2010 population statistics of the Las Vegas metropolitan area (U.S. Census Bureau 2010). In addition to our sample being older and slightly more educated than the average resident, we also have more homeowners in our data. Because our random sampling methodology did not discriminate by housing type (single-family housing vs. multi-family housing), our sample returned very few multi-family housing units. As a result, we have undersampled those most likely to be in renting situations and living in apartment complexes, including younger residents, those with lower incomes, and those with shorter residential tenure. These sampling disparities may bias results toward more established middle-class homeowners in the Las Vegas metropolitan area if controlling for demographic and socioeconomic characteristics does not fully capture attitudinal differences concerning neighborhoods between middle-class and working-class households.
---
Dependent Variables
The majority of our survey instruments were replicated from the Phoenix Area Social Survey (PASS), including our key dependent variables. The first dependent variable in the LVMASS comes from a survey question that captures the perceived quality of life in the neighborhood. Residents were asked to rate the overall quality of life in their neighborhood as "Very Good," "Fairly Good," "Not Very Good," and "Not at all Good." Neighborhood Quality was coded 1 (Not at all Good) to 4 (Very Good). The second dependent variable comes from a four-point Likert scale that asks respondents to rate their satisfaction with the economic value of homes in the neighborhood. Specifically, respondents indicated whether they were "Very Satisfied," "Somewhat Satisfied," "Somewhat Dissatisfied," or "Very Dissatisfied" with the economic value of the homes in their current neighborhood. We arrange the responses from the most negative response of 1 (Very Dissatisfied) to the most positive response of 4 (Very Satisfied). This measure taps residents' perceptions of home values, not actual home values, as most home prices were in decline at this time. For the regression analyses we maintain the ordinal level of measurement of these variables.
---
Key Neighborhood-Level Independent Variables
First, from the 2008 NSP data, we assess census tract foreclosure rates from the number of new foreclosure starts that occurred between 6 and 18 months preceding the LVMASS. These are the first data since the Great Recession to allow scholars the opportunity to examine the relationships between neighborhood-level foreclosure rates and residential neighborhood sentiments. To test the reliability of HUD's estimated foreclosure rate at the local level, HUD asked the Federal Reserve to compare HUD's estimate to data the Federal Reserve had from Equifax showing the percent of households with credit scores that were delinquent on their mortgage payments 90 days or longer. Analysis by Federal Reserve staff found that when comparing the HUD-predicted county foreclosure rates to the Equifax county-level rates of delinquencies, HUD's data and the Equifax data had high intrastate correlations. For the state of Nevada, the correlation was 0.88 (Department of Housing and Urban Development 2008). After merging the NSP data with LVMASS data, the average neighborhood foreclosure rate is 21.6%, which corresponds closely to the average foreclosure rate of 22% across the 345 census tracts reported for the Las Vegas metropolitan area in the NSP data. Harding (2009) identifies three distinct phases of the foreclosure process: a period of delinquency leading to foreclosure, a period wherein the bank takes possession of the property (i.e., it becomes a REO: Real-Estate Owned Property), and the resale period after the REO transaction. Our foreclosure measure best captures the later stages of the first step in this process. Prior research suggests a lagged foreclosure effect on the property values of nearby residents in the neighborhood. Harding et al. (2009) finds that the maximum negative effect of a foreclosure on home values of nearby properties occurs right around the time of the REO transaction, whereas Gerardi et al. (2012) finds that the negative effect of foreclosures on nearby properties peaks before the distressed properties complete the REO transaction. Our measure of foreclosure starts up to 18 months prior to the launch of the LVMASS should overlap quite nicely with when we would expect there to be peak foreclosure effects on sentiments concerning one's neighborhood and property values. At a minimum, the temporally variant nature of the foreclosure process (and its effects) means our measure certainly captures a period when the crisis is unfolding, but it may or may not capture the exact peak of the crisis and could therefore underestimate the full magnitude of the crisis. However, the timing of the LVMASS and our foreclosure measure is also advantageous because it captures the first wave of mass foreclosures. If the study was conducted a year or two later at the absolute peak, then there might not have been enough neighborhood variation to detect statistically significant effects.
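The construction of the multilevel data set described above amounts to computing a tract-level rate and attaching it to respondents by census tract identifier. The R sketch below illustrates that merge; all object and variable names are hypothetical, and the denominator used here (number of mortgages per tract) is an assumption for illustration rather than the exact NSP definition.

```r
# Sketch: tract-level foreclosure rates attached to survey respondents by tract ID.
nsp <- data.frame(                                   # hypothetical NSP-style counts
  tract_id           = c("32003001", "32003002", "32003003"),
  foreclosure_starts = c(120, 45, 210),
  mortgages          = c(600, 400, 800)              # assumed denominator
)
svy <- data.frame(                                   # hypothetical respondent records
  resp_id  = 1:6,
  tract_id = c("32003001", "32003001", "32003002", "32003003", "32003003", "32003002")
)

nsp$foreclosure_rate <- 100 * nsp$foreclosure_starts / nsp$mortgages

# Respondents clustered within tracts, with the tract foreclosure rate attached
merged <- merge(svy, nsp[, c("tract_id", "foreclosure_rate")], by = "tract_id", all.x = TRUE)
merged

mean(nsp$foreclosure_rate)   # average neighborhood rate (reported as 21.6% in the study)
```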
Second, we construct a measure of neighborhood disorder from an index of five items asked in the LVMASS. We asked respondents whether vacant land, unsupervised teenagers, litter or trash, vacant houses, and graffiti in their neighborhoods are a big problem (coded 3), a little problem (coded 2), or not a problem (coded 1). The index ranges from 5 (Lowest Disorder) to 15 (Highest Disorder), is normally distributed, and has a Cronbach's alpha score of 0.74, indicating sufficient internal consistency among items. To create a neighborhood-level measure, we then calculated each neighborhood-specific mean from this scaled index.
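For readers who want to replicate this kind of index construction, the sketch below shows one way to build the summed disorder index, compute Cronbach's alpha, and aggregate to neighborhood means. It is written in Python with pandas; the column names and toy values are ours and purely illustrative, not the LVMASS data.

```python
import pandas as pd

# Hypothetical respondent-level data: each disorder item coded
# 1 = not a problem, 2 = a little problem, 3 = a big problem.
items = ["vacant_land", "unsup_teens", "litter", "vacant_houses", "graffiti"]
df = pd.DataFrame({
    "neighborhood":  [1, 1, 2, 2, 2],
    "vacant_land":   [1, 2, 3, 2, 3],
    "unsup_teens":   [1, 1, 2, 3, 3],
    "litter":        [2, 1, 3, 3, 2],
    "vacant_houses": [1, 1, 3, 2, 3],
    "graffiti":      [1, 2, 2, 3, 3],
})

def cronbach_alpha(item_df):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = item_df.shape[1]
    item_vars = item_df.var(axis=0, ddof=1)
    total_var = item_df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

df["disorder_index"] = df[items].sum(axis=1)            # ranges 5 (low) to 15 (high)
alpha = cronbach_alpha(df[items])                        # reported as 0.74 in the text
nbhd_disorder = df.groupby("neighborhood")["disorder_index"].mean()  # level-2 measure
```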
Third, our measure of collective efficacy or "neighborliness" was composed of five items that assessed respondents' evaluations of neighborly interactions. The items were: "I live in a close-knit neighborhood," "I can trust my neighbors," "My neighbors don't get along" (reverse coded to match the direction of the other items), "My neighbors' interests and concerns are important to me," and "If there were a serious problem in my neighborhood, the residents would get together to solve it." Responses ranged from strongly disagree to strongly agree. The index ranges from 5 (Least Neighborly) to 25 (Most Neighborly), is normally distributed, and has a Cronbach's alpha of .79. We calculated neighborhood-specific means to create a neighborhood-level measure of collective efficacy for each of our 22 neighborhoods.3
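The neighborliness index follows the same logic, with the additional step of reverse-coding the negatively worded item before summing. A minimal illustration, again with hypothetical column names and toy values rather than the study data:

```python
import pandas as pd

# Hypothetical item responses on a 1-5 agreement scale; column names are illustrative.
ce_items = ["close_knit", "trust_neighbors", "dont_get_along", "interests_matter", "would_solve"]
df = pd.DataFrame({
    "neighborhood":     [1, 1, 2, 2],
    "close_knit":       [4, 3, 2, 5],
    "trust_neighbors":  [5, 4, 2, 4],
    "dont_get_along":   [1, 2, 4, 1],   # negatively worded item
    "interests_matter": [4, 4, 3, 5],
    "would_solve":      [5, 3, 2, 4],
})

# Reverse-code the negatively worded item so higher values always indicate more neighborliness.
df["dont_get_along"] = 6 - df["dont_get_along"]

df["collective_efficacy"] = df[ce_items].sum(axis=1)   # ranges 5 (least) to 25 (most)
nbhd_ce = df.groupby("neighborhood")["collective_efficacy"].mean()  # neighborhood-level mean
```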
---
Control Variables
Previous studies indicate that homeownership and length of residence are important predictors of neighborhood attachment (Kasarda and Janowitz 1974;Sampson 1988;Adams 1992;Rice and Steel 2001;Lewicka 2005;Brown et al. 2004;Schieman 2009). Therefore, we included a dichotomous variable for homeownership (vs. renting) and a continuous variable for length of current residence in years. We also controlled for variables that approximate life-cycle stage and indicate socioeconomic status. Age is a continuous variable. Race is coded White (1) and Non-White (0). Education is categorized into "High School Degree or Less" (reference category), "Some College Education," and "College Degree or More." Marital Status was a binary variable indicating Married (1) vs. Non-Married (0). Finally, employment status was a dichotomous variable indicating whether the respondent was employed (1) vs. unemployed (0) at the time of survey completion. We find that roughly 7% of the sample was unemployed, which is consistent with the unemployment rate of 7.4% for Las Vegas reported in the 2007-2011 American Community Survey 5-year estimates (U.S. Census Bureau 2011). Additional descriptive statistics are provided in Table 2.
---
Analytic Approach
Multilevel methods are employed for this study to address the issue of non-independence caused by the clustering of residents within neighborhoods. Multilevel models address the issue of non-independence by appropriately adjusting the standard errors of the independent variables. More specifically, for this study we estimated several multilevel models for ordinal response variables. These multilevel ordinal logistic models assess the relationship between neighborhood foreclosure rates, neighborhood disorder, and neighborhood collective efficacy on individual sentiments regarding neighborhood quality and neighborhood property values. The specification of these multilevel ordinal logistic models maintains the proportional odds assumption required by ordinal logistic regression (Raudenbush and Bryk 2002:320). Importantly, by taking a multilevel approach, this study is also able to determine the proportion of variation in residents' sentiments that exists across neighborhoods, and we are then able to determine how much of that neighborhood-level variation is explained by our key independent variables. The analysis proceeds in five steps. First, for both dependent variables a null model with no predictor variables is estimated to determine the amount of neighborhood-level variation in residents' sentiments toward their neighborhoods. Second, we include individual-level control variables to minimize any conflating of the variance components that may be attributed to the compositional characteristics of the neighborhoods (e.g., socio-demographic characteristics). Third, we introduce into the model our measures of neighborhood reputation-perceived neighborhood disorder and collective efficacy-to (a) assess the total effect of neighborhood reputation on residents' neighborhood sentiments, and (b) determine how much neighborhood variation in the response variables is accounted for by the inclusion of neighborhood reputation (using the model with just individual-level control variables as the comparison model). Fourth, we remove the measures of neighborhood reputation and add neighborhood foreclosure rates into the model to assess the same quantities as for neighborhood reputation. Finally, we estimate the complete model that includes all the individual-level control variables and our measures for neighborhood reputation and neighborhood foreclosure rates. The objective of this final model is to assess the mediation effects of our key neighborhood-level variables. We rely on the KHB-method to examine the statistical significance of these key mediation effects (Breen, Karlson, and Holm 2013).
Footnote 3: We acknowledge the absence of non-resident input in our measures of neighborhood reputation. Non-resident viewpoints are important to consider when policy decisions are being made about urban development and resource redistribution that affect neighborhoods. However, we assume a good deal of correspondence between resident and non-resident perceptions of neighborhood reputation, and although there is likely to be some slippage between resident and non-resident viewpoints, it is unlikely that these viewpoints would be so discrepant as to render a fundamental misinterpretation of neighborhood reputation. Of course we have no way of testing this directly, but we welcome further empirical inquiry on this matter.
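Although the model equations are not printed in the text, the analysis described above corresponds to a conventional two-level cumulative-logit (proportional-odds) specification with a random intercept for neighborhoods. The following is a generic sketch in our own notation, not necessarily the exact parameterization used by the authors:

```latex
% Generic two-level proportional-odds model with a neighborhood random intercept.
\[
\operatorname{logit}\!\big[\Pr(Y_{ij} \le k)\big]
  = \theta_k - \left(\boldsymbol{\beta}' \mathbf{X}_{ij} + \boldsymbol{\gamma}' \mathbf{W}_{j} + u_{j}\right),
\qquad u_{j} \sim N(0, \tau), \quad k = 1, 2, 3,
\]
```

Here Y_ij is resident i's four-category rating in neighborhood j, X_ij contains the individual-level controls, W_j contains the neighborhood-level predictors (disorder, collective efficacy, and the foreclosure rate), u_j is the neighborhood random intercept, and the proportional-odds assumption is reflected in the fact that the slopes do not vary across the thresholds. Under this kind of parameterization, a common latent-variable approximation to the intraclass correlation is tau / (tau + pi^2/3), although the ICCs reported below may have been computed with a different formula.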
---
Results
According to the descriptive statistics in Table 2, a majority of residents (84%) reported a fairly good or very good level of neighborhood quality despite the ongoing foreclosure crisis. Sentiments regarding neighborhood property values are also generally positive in that a slight majority (56%) report being either very satisfied or somewhat satisfied with current home values. Unfortunately, a baseline measure is unavailable to determine whether these reported satisfaction levels are below pre-recession levels. According to the intraclass correlation coefficient (ICC) calculated from the null, intercept-only model (not shown), approximately 30% of the variation in the sentiments regarding neighborhood quality exists across neighborhoods, and approximately 8% of the variation in sentiments regarding home values exists across neighborhoods. In both instances, there is greater variation in resident sentiment within neighborhoods than across neighborhoods, which is likely to be the case when studying neighborhood effects within a single metropolitan area. Yet, there is sufficient between-neighborhood variation for the primary objective of examining the relative role of neighborhood reputation versus neighborhood foreclosure rates in shaping individual-level sentiments.
Neighborhood reputations are reflected in shared individual perceptions regarding problematic issues in the area and whether there is a commonly held belief among residents in the collective ability of the neighborhood to address issues if problems arise. On average, neighborhood reputations in Las Vegas during the foreclosure crisis are at a 50/50 level on both measures, as the overall means fall approximately halfway on the aggregated scale (e.g., ave. neighborhood disorder = 7.68; ave. collective efficacy = 13.04). This means that half of Las Vegas neighborhoods enjoy a generally positive reputation, whereas the other half generally has poorer-than-average reputations. There is also noteworthy geographic variation in projected neighborhood foreclosure rates as the range of rates goes from a low of 15% to a high of nearly 30%. The bivariate correlations between the two components of neighborhood reputation-disorder and collective efficacy-and foreclosure rates are high (.809 and -.737, respectively). These correlations indicate that neighborhoods with higher foreclosure rates tend to be perceived as more disordered and as having lower collective efficacy. Note that collinearity is not a concern in the regression models as the variance inflation factor for the foreclosure rate (VIF = 3.45) is below even the modest cut point for concern (e.g., 4).
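The collinearity check mentioned here can be reproduced with a standard variance inflation factor calculation. A hedged sketch using statsmodels is given below; the simulated neighborhood-level values are illustrative stand-ins, not the study's 22 observed neighborhoods.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Simulated, illustrative neighborhood-level predictors (22 rows, as in the study design).
rng = np.random.default_rng(0)
disorder = rng.normal(7.7, 1.0, 22)
efficacy = rng.normal(13.0, 1.5, 22)
foreclosure = 0.8 * disorder - 0.7 * efficacy + rng.normal(0, 1, 22)

# Design matrix with a constant; VIF for the foreclosure rate is in column index 3.
X = sm.add_constant(np.column_stack([disorder, efficacy, foreclosure]))
vif_foreclosure = variance_inflation_factor(X, 3)
print(vif_foreclosure)   # values below roughly 4 are usually treated as unproblematic
```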
Table 3 provides the results from an analysis that disentangles the relative influence of neighborhood foreclosure rates and neighborhood reputation on individuals' sentiments regarding the general quality of the neighborhood and regarding property values. The results from six multilevel ordinal regression models (three for each outcome) are presented in Table 3. The null models that contain only individual-level controls (not shown) provide the baseline variance components that are used for comparative purposes with the results that appear in Table 3. First, note that there are several individual-level effects that are generally robust throughout the analysis. More highly educated individuals are more critical of the quality of their neighborhood, whereas age is positively associated with an individual's satisfaction with the current property values. 4 Homeownership is positively associated with neighborhood quality, although after conditioning on neighborhood reputation, homeownership fails to attain statistical significance. On the other hand, homeownership is negatively associated with the satisfaction level of the neighborhood's property values, suggesting a greater level of insecurity that homeowners feel about what is usually their most valuable financial asset. The neighborhood-level variances from the models with only the individual-level controls are 1.004 for neighborhood quality and .200 for neighborhood property values. The respective intraclass correlation coefficients are .30 and .06, which are very similar to the ICCs from the intercept-only models, meaning the compositional effects stemming from these individual-level characteristics are minimal.
Model 1a and Model 1b in Table 3 report the effects of neighborhood reputation on individual sentiments regarding the general quality of their neighborhoods, and their satisfaction toward property values in the neighborhood, before accounting for the foreclosure rate. The effects from perceived problems with neighborhood disorder and collective efficacy on assessments of neighborhood quality are strong and statistically significant beyond a 99.9% confidence level. For example, a one-unit difference in perceived neighborhood disorder (i.e., nearly a standard deviation) is associated with a 32% decline [(1 - exp(-.391)) x 100, where exp(-.391) = .676] in the average resident's odds of reporting a more favorable response toward neighborhood quality (e.g., "fairly good" rather than "not very good"). A one-unit difference in collective efficacy is associated with a 38% increase [(exp(.321) - 1) x 100, where exp(.321) = 1.38] in the odds of reporting a positive response toward neighborhood quality compared to a negative assessment. The effect of collective efficacy is also strong when considering assessments of neighborhood property values in Model 1b. There, a one-unit difference in collective efficacy is associated with a 33% increase [(exp(.285) - 1) x 100, where exp(.285) = 1.33] in the odds of reporting being "somewhat satisfied" versus "somewhat dissatisfied" with neighborhood property values. The effect of perceived neighborhood disorder fails to attain statistical significance in Model 1b, suggesting a lesser role of perceived disorder on property assessments than collective efficacy. Considering these measures together, we can say that neighborhood reputation does a very good job of explaining neighborhood-level variation. The proportional reduction in neighborhood-level variance is 97% [(1.004 - .030)/1.004] for assessments of neighborhood quality, and 99.5% [(.200 - .001)/.200] for neighborhood property values. Even when starting from modest intraclass correlation coefficients to begin with, the reduction in level-two variance attributed to neighborhood reputation is noteworthy.
Model 2a and Model 2b in Table 3 assess the relationship between foreclosure rates and assessments of neighborhood quality and neighborhood property values prior to adjusting for neighborhood reputation. As expected, the effects of foreclosure are negative and statistically significant. A one percentage point increase in a neighborhood's foreclosure rate is associated with a 22% decline in the odds that the average resident reports a more favorable assessment of the quality of their neighborhood and an 11% decline in the odds that the average resident reports greater satisfaction with neighborhood property values. Foreclosure rates also explain neighborhood variation in residents' sentiments, but the explanatory power of foreclosure rates is not as impressive as it is for neighborhood reputation. Foreclosure rates account for 74% of the neighborhood variation in assessments of quality, but only 7% of the neighborhood variation in the assessments of property values.
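As a quick arithmetic check, the percentage changes in odds and the proportional reductions in neighborhood-level variance reported above can be reproduced directly from the published coefficients and variance components. The short sketch below hard-codes those reported values; it is an editorial illustration, not part of the original analysis.

```python
import math

# Percent change in odds implied by a logit coefficient b: (exp(b) - 1) * 100.
def pct_change_in_odds(b):
    return (math.exp(b) - 1) * 100

print(round(pct_change_in_odds(-0.391)))   # disorder -> quality: about -32%
print(round(pct_change_in_odds(0.321)))    # collective efficacy -> quality: about +38%
print(round(pct_change_in_odds(0.285)))    # collective efficacy -> home values: about +33%
print(round(pct_change_in_odds(-0.245)))   # foreclosure rate -> quality: about -22%
print(round(pct_change_in_odds(-0.116)))   # foreclosure rate -> home values: about -11%

# Proportional reduction in neighborhood-level variance after adding neighborhood
# reputation, relative to the controls-only baseline variances.
print((1.004 - 0.030) / 1.004)  # ~0.97 for neighborhood quality
print((0.200 - 0.001) / 0.200)  # ~0.995 for home values
```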
The theoretical motivation for this study concerns the role of neighborhood reputation in shaping individual sentiments during a crisis period. One perspective advanced here, via the foreclosure crisis hypothesis, suggests that the effects of neighborhood reputation may be largely filtered through objective neighborhood circumstances when a crisis strikes, causing the effects of neighborhood reputation to be less salient than during ordinary times. In support of this perspective, we should expect objective measures of neighborhood foreclosure during an economic crisis to significantly mediate the effects of neighborhood reputation. According to the results in Model 3a and 3b in Table 3, we find rather limited support for this perspective.
When foreclosure rates are added to the model with the covariates for neighborhood reputation, the effect of neighborhood disorder attenuates by over a third when examining sentiments of neighborhood quality (b = -.391 vs. b = -.250), and when considering assessments of property values, the mediation effect of foreclosure on perceptions of neighborhood disorder is upwards of 88% of the initial effect [e.g., (-.056 + .007) / -.056]. However, several patterns in the results temper these findings. First, although foreclosure rates do attenuate the effects of neighborhood disorder, the initial effect of disorder on property assessments is not statistically significant and the direct effect of foreclosure in Model 3b also fails to attain statistical significance. Second, the attenuation of collective efficacy after adjusting for foreclosure in both Model 3a and 3b is minimal.
Drawing on disaster recovery research, this study also posited an alternative hypothesis regarding the role of neighborhood reputations during a crisis. According to the neighborhood resiliency perspective, the relationship between foreclosure rates and the sentiments of residents may be mediated once adjusting for neighborhood reputation because neighborhood reputations may act as a guide for residents during the crisis. This should be especially true of collective efficacy, as neighbors may be more likely to witness the kinds of behaviors that conform to their preconceived beliefs. According to Model 3a and 3b, we find fairly strong support for this perspective, as the foreclosure rate is notably attenuated in both models (-.245 vs. -.077 and -.116 vs. -.028); and rather impressively, the effect of collective efficacy remains robust and statistically significant at a high level (.321 vs. .301 and .285 vs. .278). Thus, collective beliefs about a neighborhood's ability to prevent and address problematic issues appear to be a resounding aspect of a neighborhood's reputation that continues to shape individual sentiments during a crisis period.
To formally test the statistical significance of these mediation effects, we use the KHB-method (Breen, Karlson, and Holm 2013). We rely on the KHB-method because the method typically used for assessing mediation effects (e.g., the Sobel test) in linear models cannot be used in the context of nonlinear probability models (e.g., those using a logit link) because the change in the mediated coefficient is not only influenced by the mediators but also by a rescaling of the logit coefficients in relation to the error variance. The KHB-method distinguishes the change in the focal coefficient due to true mediation from the change that is due to rescaling. Robust standard errors for the decomposition effects (indirect, direct, and total) are used to get cluster-adjusted p-values.
The results of this formal test confirm our preliminary conclusions. On the one hand, we find that only neighborhood disorder is significantly mediated by foreclosure rates (-.391 + .250 = -.141; p < .05, one-tail) when assessments of neighborhood quality is the outcome. When satisfaction with neighborhood home values is the outcome, the mediation effect (-.056 + .007 = -.049) is not statistically significant at even the p < .10 level. Collective efficacy is not significantly mediated by foreclosure rates in either model. These findings provide fairly limited support for the foreclosure crisis hypothesis: A foreclosure crisis only appears to modestly shape the relationship between a neighborhood's perceived level of disorder and a resident's assessment of neighborhood quality.
Conversely, we find that the effect of foreclosure rates on individual assessments of neighborhood quality and satisfaction with home values are significantly mediated by collective efficacy and neighborhood disorder (p < .001, two-tail). Collective efficacy accounts for nearly 58 percent, and neighborhood disorder accounts for 42 percent, of the mediated effect of foreclosure on neighborhood quality (-.245 + .077 = -.168). Collective efficacy also accounts for nearly all the mediated effect of foreclosure rates (-.116 + .028 = -.088) on home value satisfaction. These statistically significant effects support the neighborhood resiliency hypothesis: Neighborhood reputations appear to have mitigated the local response of residents to the foreclosure crisis.
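The indirect (mediated) effects quoted above are simply the difference between the total and direct coefficients; the sketch below reproduces that arithmetic from the reported estimates. Note that the full KHB decomposition also corrects for the rescaling of logit coefficients across nested models, which this back-of-the-envelope calculation does not attempt.

```python
# Indirect effect = total effect - direct effect, using the coefficients reported above.
def indirect(total, direct):
    return total - direct

print(indirect(-0.391, -0.250))   # disorder -> quality, mediated by foreclosure: -0.141
print(indirect(-0.056, -0.007))   # disorder -> home values, mediated by foreclosure: -0.049
print(indirect(-0.245, -0.077))   # foreclosure -> quality, mediated by reputation: -0.168
print(indirect(-0.116, -0.028))   # foreclosure -> home values, mediated by reputation: -0.088
```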
---
Conclusion and Discussion
The motivation for this study is based on the premise that neighborhood reputations matter in people's lives, and that they matter especially during unsettled times when preconceived beliefs are likely to be more heavily relied upon to guide residents. Yet simultaneously, this study recognizes that the objective realities wrought by a crisis will also force residents to reevaluate and potentially remake in a new light these previously held beliefs. Among those familiar with living in disaster areas, this is known as "the new normal." Understanding this dynamic interplay involves paying close attention to the way collective behaviors and shared beliefs function during a catastrophe. This study advances our understanding of this interplay by examining how objective realities and neighborhood reputations shape individual sentiments as a foreclosure crisis unfolds.
Central to our premise, on the one hand, is whether a particular crisis creates a large enough disjuncture between the current beliefs and new realities to significantly alter resident sentiment toward their neighborhood, or on the other hand, whether residents largely respond to the crisis in a manner consistent with the neighborhood's current reputation thereby either minimizing or maximizing the potential harm of the crisis. These are not mutually exclusive possibilities, but these two perspectives do lead to alternative hypotheses. The former perspective-the foreclosure crisis hypothesis-posits that commonly held beliefs about a neighborhood are affected by the realities of the crisis, and as a result, the reputational effects of a neighborhood should wane once the crisis-related effects are taken into consideration. The latter perspective-the neighborhood resiliency hypothesis-places more emphasis on the durability of a neighborhood's reputation by anticipating a robust, and largely unaffected, correspondence between collective beliefs and individual sentiments despite any objective crisis-related circumstances.
The findings from our study provide qualified support for both perspectives. On the one hand, the effects of perceived neighborhood disorder on the sentiments residents feel toward the general quality of the neighborhood, and their comfort with current property values in the neighborhood, are greatly attenuated once we control for the neighborhood foreclosure rate. In other words, objective realities presented by the foreclosure crisis do affect the collective importance residents place on perceived levels of disorder when assessing, and perhaps reassessing, the quality of their neighborhoods. This finding supports the foreclosure crisis hypothesis. On the other hand, we also find that the effects of neighborhood foreclosure rates on the sentiments residents feel toward their neighborhood are significantly mediated by both neighborhood disorder and neighborhood collective efficacy-with collective efficacy accounting for the majority of the mediation effects in this study. This means that the effects from neighborhood foreclosure rates are in large part filtered through the neighborhood's current perceived status to influence how residents respond to the crisis. This finding supports the neighborhood resiliency hypothesis.
Of particular note in this study are the salient and robust collective efficacy effects. It is actually quite remarkable that neighborhood collective efficacy during the foreclosure crisis not only significantly mediates the effects of foreclosure rates but also continues to independently shape individual sentiments. This is remarkable because Las Vegas is a highly transitory city with sufficient speculation about the reputational authenticity of some of the area's newer master-planned communities (cf. Knox 2008). If collective efficacy is this salient under nascent conditions, it is also quite possible collective efficacy will be even more important during a crisis for cities with many older, well-established neighborhoods. Moreover, collective efficacy might be the key differentiating factor among otherwise homogenous master-planned communities in other areas of the country.
The results of this study should be considered in light of several limitations. First, the LVMASS data are cross-sectional. As such, they represent a snapshot of residents' perceptions of neighborhood quality of life, satisfaction with the economic value of their homes, and attitudes toward neighborliness during the midst of the Las Vegas foreclosure crisis. While the results of this research have demonstrated a robust link between housing foreclosures and residents' sentiments, as well as evidence that neighborhood collective efficacy mediates the effects of housing foreclosures, these data do not allow us to test the complete cycle from the crisis event through current neighborhood status to individual sentiments and then back full circle to neighborhood change. Our findings do capture, however, the all-important first stage in this process, and we look forward to collecting longitudinal data that will allow us to model the complete process.
Second, omitted variable bias is always a concern with observational data, and as a result, we caution readers from inferring definitive causality from our results. It is possible that factors other than foreclosure rates and neighborhood reputations affect resident sentiments. For example, residential sorting could bias our foreclosure effects downward if many dissatisfied homeowners had time to relocate before the LVMASS. This also could mean that those residents remaining in hard hit neighborhoods might be more content with their neighborhoods (e.g., for sentimental reasons), biasing the effects of neighborhood reputation upward. However, a more plausible scenario, at least for the beginning of the crisis, would be that dissatisfied homeowners would be unable to move because (a) their properties are underwater (i.e., they owe more than the market value of the property), and/or (b) they simply can't sell because of the lack of buyers. Frustrated residents in this situation would very likely feel resentment for the neighborhood, and importantly, this effect would offset any downward bias attributed to residential out-migration.
Third, although LVMASS data do not allow us to examine different neighborhood amenities, we suspect that some neighborhoods may be more protected from economic distress and report less negative neighborhood experiences than others because of particular amenities. Future research should explore in more detail whether master planned communities and/or those with homeowners associations are buffered from the negative effects stemming from housing foreclosures. If these communities are commodified in ways that shield them from property value decline (Le Goix and Vesselinov 2012) through covenants, conditions, and restrictions (CCRs), then they might also be shielded from neighborhood quality decline during an economic downturn. On the other hand, to the extent that master-planned communities produce a housing price premium, and to the extent that HOAs fail to mitigate foreclosures, the effects of major boom-and-bust cycles may be especially pronounced in these types of neighborhoods. Future research should explore these possibilities in more detail.
Lastly, our results need to be put into context. A foreclosure crisis is a relatively weak and slow-moving crisis scenario compared to several recent natural disasters. Although the prospects for a full housing recovery in Las Vegas remain very much in question-wavering somewhere between the "Sunburnt" city envisioned by Hollander (2011) and that of the "business-as-usual" growth machine (Pais and Elliott 2008)-it would be rather surprising for housing foreclosures alone to completely refashion an area's reputation. In fact, recent evidence suggests that housing values are moving back toward pre-recession levels, with the Las Vegas metro area leading the pack in property value increases (Firki and Muro 2013;Friedhoff and Kulkarni 2013). Comparatively, it is perhaps more difficult to imagine how collective efficacy, or any other reputational characteristic, can spur resiliency when an entire community is physically leveled and fully displaced. Yet, time and time again, communities are rebuilt from utter devastation, and it would be equally difficult to imagine how this is possible without an appreciation for the collective trust residents have in their neighbors. What remains entirely unknown are the crisis thresholds and event conditions for when neighborhood reputations matter the most for disaster resiliency. | 59,966 | 1,128 |
469a148cc672e49a5ea4991ca0a2318ae92021aa | Social stratification and adolescent overweight in the United States: how income and educational resources matter across families and schools. | 2,012 | [
"JournalArticle"
] | The current study examines how poverty and education in both the family and school contexts influence adolescent weight. Prior research has produced an incomplete and often counterintuitive picture. We develop a framework to better understand how income and education operate alone and in conjunction with each other across families and schools. We test it by analyzing data from Wave 1 of the U.S.-based National Longitudinal Study of Adolescent Health (N = 16,133 in 132 schools) collected in 1994-1995. Using hierarchical logistic regression models and parallel indicators of family- and school-level poverty and educational resources, we find that at the family level, parents' education, but not poverty status, is associated with adolescent overweight. At the school level, the concentration of poverty within a school, but not the average level of parents' education, is associated with adolescent overweight. Further, increases in school poverty diminish the effectiveness of adolescents' own parents' education for protecting against the risks of overweight. The findings make a significant contribution by moving beyond the investigation of a single socioeconomic resource or social context. The findings push us to more fully consider when, where, and why money and education matter independently and jointly across health-related contexts. |
Most studies examining the role of social inequalities for adolescent overweight and obesity in the United States focus on differences in family income. An underlying assumption motivating this area of research is that money protects individuals from obesity in today's "obesogenic" society. Several theoretical and practical arguments have been used to buttress this supposition, yet most prior studies do not support this assumption. They do not find a significant negative association between family income and adolescent overweight or obesity in nationally representative samples of U.S. youth (Goodman et al., 2003b;Gordon-Larsen et al., 2003;Martin, 2008;Troiano & Flegal, 1998;Wang & Zhang, 2006;Zhang & Wang, 2007). Nonetheless, many scholars still assert that money should be an important resource that protects adolescents from being overweight.
We seek to better understand how socioeconomic resources matter for adolescent weight in two important ways. First we consider whether a different stratified resource is importantparents' education. Second, we consider whether adolescent weight is associated with the financial and educational resources in schools -another social context that is highly influential for adolescent well-being (Teitler & Weiss, 2000). Both families and schools are highly stratified with regard to both income and parents' education. Thus, we investigate how these two resources in these two contexts influence adolescents' risk of being overweight by analyzing data from the National Longitudinal Study of Adolescent Health (Add Health).
---
FAMILY AND SCHOOL-LEVEL RESOURCES INFLUENCING ADOLESCENT WEIGHT
Researchers often study family income and parents' education together as indicators of socioeconomic status and predict that they have a similar association with adolescent weight given that family income and parents' education are positively correlated (Balistreri & Van Hook, 2009;Goodman, 1999;Goodman et al., 2003a;Goodman et al., 2003b;Haas et al., 2003;Kimm et al., 1996;Strauss & Knight, 1999). We depart from this general approach and seek to unpack how parental education and income are distinctly associated with adolescent weight. This line of investigation and the hypotheses we derive are driven by prior research, which suggests that family income and parents' education may influence adolescent weight in different ways.
We also examine how income and education are associated with adolescent weight across two contexts: the family-and school-level. Research on the latter is relatively novel, which leads us to be more speculative about how school-level income and education are associated with adolescent overweight. Our hypotheses about their importance derive from empirical results in two prior studies and from weaving together several strands of prior research on schools. Bringing schools' socioeconomic resources into the research on adolescent overweight and obesity is a significant contribution given the paucity of research on this topic and the strong influence of schools on adolescents' lives (Teitler & Weiss, 2000).
---
Family Resources and Adolescent Overweight
Because most American adolescents are dependent on their parents, adolescent stratification is a function of their families' socioeconomic status, meaning their parents' income and education. We argue that parents' education and family income capture unique resources and patterns that can influence adolescent weight.
Family income provides families with the power to purchase goods and services, depending on their relative prices. In general, "healthy" food is relatively expensive and "bad" food is cheap (Drewnowski & Specter, 2004). Furthermore, the costs for adolescents' physical activity are rising as schools implement pay-to-play policies for organized sports (McNeal, 1998). As such, scholars have argued that greater family income can affect an adolescent's ability to maintain a healthy weight because it increases their ability to purchase "healthy" weight-related goods (Cawley, 2004). This theoretical perspective is pervasive in the literature and leads to the argument that family income should be negatively correlated with adolescent weight.
Despite the dominance of the supposition of a negative correlation between family income and adolescent weight, income could also be positively correlated with adolescent weight. Instead of using money to promote a healthy weight, families and adolescents could spend their money on goods that generate risks for adolescent overweight, such as video games, or meals prepared away from home.
The empirical evidence regarding the association between income and adolescent weight is mixed and generally does not fit with the dominant perspective that the correlation between income and adolescent weight is negative. Only one study finds a significant, negative association between family income and adolescents' weight in a nationally representative sample (Goodman, 1999), while other studies find a negative association only for narrowly defined adolescent subpopulations (Balistreri & Van Hook, 2009;Goodman et al., 2003b;Gordon-Larsen et al., 2003;Kimm et al., 1996;Miech et al., 2006;Troiano & Flegal, 1998;Zhang & Wang, 2007). Evidence supporting a positive association is also scarce: only one study finds a positive association among a nationally representative sample of adolescents (Haas et al., 2003). The majority of studies find no link between either family income or poverty and adolescent weight (Goodman et al., 2003b;Gordon-Larsen et al., 2003;Martin, 2008;Troiano & Flegal, 1998;Wang & Zhang, 2006;Zhang & Wang, 2007).
More consistent is a small body of literature demonstrating that parents' education is significantly and negatively associated with adolescent weight among U.S. adolescents (Goodman, 1999;Goodman et al., 2003b;Haas et al., 2003;Martin, 2008;Sherwood et al., 2009). Some may interpret this finding as a different way of measuring family income, but that interpretation ignores evidence that parents' education captures other resources, net of family income, that shape adolescents' health and well-being.
First, schooling contributes to learned effectiveness -a sense of control to accomplish goals, including those that are health-related (Mirowsky & Ross, 2003). More highly educated parents, thus, have more learned effectiveness, which should make them more likely to believe that they can influence their child's weight. Further, prior research shows that when parents try to regulate what their children eat (Ogden et al., 2006) and how active they are (Arluk et al., 2003), their children are generally leaner and less likely to be overweight.
Second, education provides parents with general capabilities, skills and knowledge (Becker, 1993) and correlates with the volume and breadth of their health-related knowledge (Link et al., 1998). We expect that education is positively correlated with an understanding of obesity's etiology and possible consequences. Research bears this out. Lower educated parents tend to rely on folk understandings about what signifies a healthy weight for youth (Jain et al., 2001) and underestimate the incidence of youth overweight and obesity (Goodman et al., 2000). One factor that might explain this is that higher educated parents are more likely to engage with medical professionals about their child's health (Lareau, 2003). Furthermore, a better understanding of obesity has been shown to prevent weight gain. Highly educated adults are less likely to be obese because of their greater awareness of the association between diet and disease (Nayga, 2000). We anticipate that this awareness carries over into how more educated parents feed and socialize their adolescent children and, thus, could influence adolescents' own weight-related choices.
Together, the arguments and empirical evidence lead us to expect that parents' education but not income will be related to adolescent overweight because of the knowledge, skills, experiences, and perspectives that are associated with more formal education. We do not discount the importance of income. Instead, we hypothesize that income matters at the school-level because money strongly shapes the amenities and stressors in adolescents' nonfamilial environments.
---
School Resources and Adolescent Overweight
We focus on schools, because outside of families, schools are the primary social institutions that organize adolescents' lives. During the academic year, adolescents spend the majority of their day at school (Zick, 2010). Schools also shape adolescents' daily activities and friendships through their extracurricular offerings (Guest & Schneider, 2003) and by organizing students into grade levels and academic tracks (Kubitschek & Hallinan, 1998).
Schools also influence what adolescents eat, do and value (Story et al., 2006;von Hippel et al., 2007). Most adolescents eat at least one meal per day at schools, which serve breakfast and lunch and have vending machines available on campus (Delva et al., 2007). Adolescents' physical activity is affected by the availability and quality of a school's physical education courses, extracurricular activities, and exercise facilities (Leviton, 2008;Sallis et al., 2001). Finally, school-based cliques influence students' weight-related norms and values (Ali et al., 2011b;Paxton et al., 1999), which in turn shape their dieting and weight-control behaviors (Ali et al., 2011a;Mueller et al., 2010).
Despite the large role that schools have in adolescents' lives, few studies have examined how school-level resources influence adolescent weight. Two exceptions focus on parallel resources to those that we investigate at the family level: the average family income of schools (Richmond & Subramanian, 2008) and the average education level of parents in the school (O'Malley et al., 2007). In these studies, both school-level resources have a significant, negative association with adolescent weight. Unfortunately, neither study addresses whether the associations they uncover are confounded by other school-level resources, including the alternate measure of school socioeconomic resources. Other potential confounders are also not addressed well. O'Malley and colleagues (2007) control for many school characteristics, the adolescent's race/ethnicity and parents' education in their statistical models, but not other individual- or family-level factors that predict adolescent weight. Richmond and Subramanian (2008) account for a limited number of individual- and family-level predictors of adolescent weight, but only include one school-level confounder -the school's racial/ethnic composition.
A primary contribution and strength of this study is the examination of whether the average family income and parental education level in schools are related to adolescent overweight net of each other and other school-, family-, and individual-level confounders. Furthermore, as the first study to examine parallel income and educational resources across family and school contexts, we can present a more complete picture of how these resources are related to weight across the two most important social institutions in adolescents' lives.
An additional contribution is that we develop explanations for how and why school-level income and parents' education are associated with adolescent overweight. Given the paucity of research on this topic, we offer new, but speculative arguments and predictions, bringing in related research where possible. We hypothesize that the average family income of a school better predicts adolescent overweight than does the average education level of parents within a school. Furthermore, we expect the estimated effect of school-level income to be nonlinear. We hypothesize that poor schools are particularly risky for adolescent overweight relative to both middle-and high-income schools.
These suppositions stem from several factors. First, school-level income is highly correlated with school funding, despite states' redistributive efforts (Corcoran et al., 2004;U.S. Government Accountability Office, 1997). Further, school funding plays a direct and important role in a school's food provisions and ability to maintain facilities and curricula that promote physical activity. Richer schools generally offer healthier à la carte and vending options than poorer schools (Delva et al., 2007) and can fully finance school physical education programs and extracurricular activities (Leviton, 2008;Story et al., 2006). Poorer schools have frequently had to cut physical activity programs given recent pressure to focus on academic test scores (Leviton, 2008;Story et al., 2006). Yet physical education and extracurricular programs are particularly important for middle and high school students given that physical activity falls precipitously during adolescence (Must & Tybor, 2005). In addition, after-school programs (of any kind) could help adolescents maintain a healthy weight because their participation limits adolescents' time available for snacking and watching television (von Hippel et al., 2007).
Second, school poverty may also be associated with adolescents' weight indirectly. Poor schools have a greater prevalence of juvenile delinquency, disorder, and classroom disruption (Mrug et al., 2008), making them stressful environments that induce individuals' stress response. Unfortunately, chronic activation of the stress response increases abdominal fat (Anagnostis et al., 2009;Bjorntorp & Rosmond, 2000;Fraser et al., 1999). This further buttresses our hypothesis that poor schools are adverse weight-related environments.
We speculate that a school's average parental education level and the prevalence of highly educated parents, in particular, could indirectly be associated with adolescent weight. Because highly educated parents make more demands for school improvements (Lareau, 2003), we expect that schools would face more pressure to maintain or improve aspects related to adolescent weight as the average of parents' education increases. Yet these efforts may be futile if the associated financial costs are high. For example, seemingly simple suggestions like eliminating advertisements for and availability of high-calorie foods and beverages in schools come at a cost because many schools rely on food industry subsidies to fund academic and extracurricular programs (Nestle, 2002). Therefore, we predict that school poverty constrains the relative influence of parents' collective education within a school.
---
The Intersection between School Poverty and Own Parents' Education
Our study asks one final question about family- and school-level resources: Do family-level parental education and school-level poverty work in conjunction to produce a joint association with adolescent weight? We speculate that school-level poverty modifies the association between family-level parental education and adolescent weight. This proposed interaction is motivated by theories of resource multiplication and resource substitution (Ross & Mirowsky, 2006). Resource multiplication theory argues that various resources accumulate to impact health (Ross & Mirowsky, 2006). In our study, this would imply that adolescents of highly educated parents in rich schools have more opportunities for maintaining a healthy weight. Those opportunities would cascade and amplify the effects of each other. As such, differences in adolescent weight by parents' education would be larger in rich versus poor schools.
Conversely, resource substitution theory predicts that various resources can have a compensatory dynamic that offsets the risks (or advantages) of another resource for one's health (Ross & Mirowsky, 2006). In our analysis, this would lead us to expect that adolescents with more educated parents are better buffered against the weight-related risks of attending a poor school. In this scenario, differences in adolescent weight by parents' education are greatest in poor schools and relatively diminished in rich schools. A priori, we think both processes are plausible.
In summary, we argue that the relationships between financial resources, educational resources and adolescent weight are complicated. We agree with scholars who argue that money is important for weight and we agree that parental education is important. But we argue that the function and relative importance of these resources varies across families and schools. We offer an initial examination of these parallel resources by exploring whether there is any evidence for the differential associations we propose across the family-and school-level by analyzing cross-sectional, nationally representative data.
---
DATA AND METHODS
Add Health is a United States school-based sample of 20,745 7th-12th graders in 1994-1995 from over 140 high schools and middle schools (Udry, 2003). The original sample, which was followed up in 1995-1996, 2001-2002 and 2007-2008, includes oversamples of Cubans, Puerto Ricans, Chinese, and high socioeconomic status African Americans (Harris et al., 2003). Human subjects approval for this study was obtained from the Pennsylvania State University's IRB. We received an expedited review for secondary data.
Our analysis relies on the 1994-1995 Wave 1 data. This is the only survey wave when parents were interviewed (and, thus, family income measured) and when school-level characteristics were obtained. Changes in family and school resources cannot be assessed. In addition, a significant proportion of adolescents are not in their Wave 1 schools by Wave 2. Some students have made normative transitions from middle school to high school and some have made non-normative transfers to other schools (Riegle-Crumb et al., 2005). In addition, adolescents who were high school seniors in Wave 1 were not followed in Wave 2. Thus, for nearly a third of our sample, Wave 1 school characteristics no longer characterize their Wave 2 schools. By Waves 3 and 4, Add Health respondents are no longer in secondary school. Despite these limitations, Add Health is still the best data source for our study. No other nationally representative data set collected since Add Health contains the requisite information on adolescents' schools and families or has data on so many factors that are confounded with socioeconomic status and weight.
We make the following sample restrictions. We randomly select one adolescent per family using STATA's random number generator if a family contributes more than one sibling to the Add Health sample. We do this because siblings cannot be treated as independent observations and, to estimate a more complicated three-level HLM model (with individual students nested within families within schools), we would need to drop 70% of the sampled families because they have only one sampled adolescent in Add Health. We exclude adolescents who were pregnant or had an unknown pregnancy status between 1994 and 1996 to avoid confounding due to the joint determination of weight and fertility. Finally, we dropped adolescents who did not have a valid sampling weight or did not attend an Add Health school.
We utilize multiple imputation to replace any missing data on analytic variables, which replaces missing values with predictions from information observed in the sample (Rubin, 1987). We use the supplemental program "ice" within STATA 9.0 (Royston, 2005a, b) to create five imputed data sets. The imputation models include all of the variables included in the empirical models, as well as each parent's occupation and adolescents' Wave 2 weight. We estimate the empirical models for each imputed data set and then combine the results, accounting for variance within and between imputed samples to calculate the coefficients' standard errors (Acock, 2005;Rubin, 1987). The final sample is 16,133 adolescents in 16,133 families attending one of 132 schools. Given Add Health's design (Chantala & Tabor, 1999), there are, on average, 128 interviewed students per school (range: 16-1,443; interquartile range: 67-136).
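The combination step follows Rubin's (1987) rules: the pooled coefficient is the average of the m imputation-specific estimates, and its variance adds the average within-imputation variance to the between-imputation variance inflated by (1 + 1/m). A minimal numerical sketch, with illustrative values rather than estimates from this study, is:

```python
import numpy as np

# Coefficient and standard error from each of the m = 5 imputed data sets (illustrative values).
est = np.array([0.052, 0.049, 0.055, 0.051, 0.058])
se = np.array([0.010, 0.011, 0.010, 0.012, 0.010])

m = len(est)
pooled_est = est.mean()                          # pooled point estimate
within_var = (se ** 2).mean()                    # average within-imputation variance
between_var = est.var(ddof=1)                    # between-imputation variance
total_var = within_var + (1 + 1 / m) * between_var
pooled_se = np.sqrt(total_var)                   # pooled standard error
```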
Overall, 33% of our sample has missing data on at least one analytic variable. The variable with the most missing data is family income, the variable that we use to assess poverty status. Among sample members, 26% have missing data on income: 15% because parents did not complete a parent questionnaire and 11% due to item non-response. Because this is a primary study variable of interest, we confirm in several supplemental tests that neither missing income data nor our imputation procedure biases our study results. The robustness checks include (1) estimating the models on a listwise deletion sample, (2) substituting an alternative indicator of poverty (i.e., parents cannot pay bills) that has less missing data (only 2.3% due to item non-response), and (3) including flags for whether family income or any other data are missing. Regarding the latter, we find that the flag for missing family income is never statistically significant, but the flag for any missing data is positive and statistically significant. Yet our substantive conclusions about our key variables do not change with any of the three robustness checks. (Results available upon request.)
Despite the importance of schools for adolescents, some may worry that our school measures are simply capturing neighborhood characteristics. Yet American schools, especially high schools, typically draw from multiple neighborhoods. For the schools in our sample, the median number of census block groups (each containing approximately 1,000 residents) per school is 29 (range: 2-286), the median number of census tracts (a common measure of U.S. neighborhoods containing approximately 4,000 residents) is 15 (range: 2-231), and the median number of counties is 3 (range: 1-9). In our sample, schools are not reducible to neighborhoods.
---
Measures
Adolescent Overweight-This dichotomous variable is based on adolescents' Wave I self-reported height and weight, which we use to construct age- and sex-specific BMI percentiles using U.S. Centers for Disease Control and Prevention guidelines (Ogden et al., 2002b). We then classify adolescents as overweight or obese (BMI ≥ 85th percentile) versus normal weight or underweight. In supplemental models, we also predict BMI z-scores with a linear model and arrive at the same substantive conclusions. (Results available upon request.)
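Classification against the CDC age- and sex-specific 85th percentile is typically carried out by converting BMI to a z-score with the CDC LMS parameters and then to a percentile. The sketch below illustrates the standard LMS formula; the L, M, and S values shown are placeholders, not actual CDC reference values, which would in practice be looked up by sex and age in months from the CDC growth-chart files.

```python
import math
from scipy.stats import norm

def bmi_zscore(bmi, L, M, S):
    """LMS z-score: ((bmi/M)**L - 1) / (L*S) when L != 0, else ln(bmi/M) / S."""
    if L == 0:
        return math.log(bmi / M) / S
    return ((bmi / M) ** L - 1) / (L * S)

# Placeholder LMS parameters for a single age/sex cell (NOT actual CDC values).
L, M, S = -2.0, 20.5, 0.13
z = bmi_zscore(24.0, L, M, S)
percentile = norm.cdf(z) * 100
overweight_or_obese = percentile >= 85   # the study's dichotomous outcome (>= 85th percentile)
```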
Family Resources-We measure parents' education as years of completed schooling (Ross & Mirowsky, 1999). In two-parent families, it is the average of both parents' education. These data are first obtained from the parent, but are supplemented with the adolescent's report when parent-reported data are missing. If both reports are missing, it is multiply imputed. Supplemental models find nearly identical results using maternal education.
For income, we create a dichotomous variable indicating that the family is poor (=1) based on parental reports of the total, pre-tax income the family received in 1994, the family's composition, and the U.S. Census Bureau official poverty thresholds for 1994 (United States Census Bureau, 2005). We focus on poverty rather than other income specifications for ease of interpretation and comparability to other studies. That said, we also estimated models using several alternative measures to ensure that our findings are insensitive to how family income is operationalized. The specific measures are as follows: (1) a linear measure of the family's originally reported total, pre-tax income, (2) the started log of income (i.e., ln[income + 1]) to have a more normal distribution of income and allow for nonlinearities whereby a $1 increase in income is more consequential at the bottom versus the top of the income distribution, (3) five dichotomous variables to indicate where, within six income percentile categories, the family income falls to examine nonlinearities throughout the income distribution, and (4) a linear measure of the family's income-to-needs ratio, which is calculated as the ratio of the family's income to the U.S. Census Bureau's official 1994 poverty threshold for their family type. The substantive results are identical across these measurements (see Appendix Table 1).
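For concreteness, the alternative income specifications described above can be constructed as follows. The family records and poverty thresholds in this sketch are illustrative only, not the official 1994 Census thresholds or Add Health values.

```python
import numpy as np
import pandas as pd

# Illustrative family records; 'poverty_threshold' stands in for the official 1994
# Census threshold matched to each family's size and composition (values are made up).
fam = pd.DataFrame({
    "income":            [0, 8_000, 15_500, 22_000, 47_000, 95_000],
    "poverty_threshold": [12_000, 12_000, 15_000, 15_000, 19_000, 19_000],
})

fam["poor"] = (fam["income"] < fam["poverty_threshold"]).astype(int)       # dichotomous poverty status
fam["log_income"] = np.log(fam["income"] + 1)                              # "started log" of income
fam["income_to_needs"] = fam["income"] / fam["poverty_threshold"]          # income-to-needs ratio
fam["income_cat"] = pd.qcut(fam["income"], q=6, labels=False, duplicates="drop")  # six percentile bins
```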
School Resources-School-level parental education is measured as the median of parents' years of schooling for attending students. School income is defined as the percentage of students in poverty, to parallel our measure of family poverty. We operationalize this variable by aggregating Wave 1 family poverty data for children attending each sampled school to calculate the percent who are poor. In supplementary analyses, we also investigate aggregations of the four other family-level measures of income (described above). Results from these supplementary models, shown in Appendix Table 1, reinforce our theoretical emphasis on school poverty; all of the nonlinear models demonstrate that the key differences are at the bottom of the school income distribution.
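The school-level measures are simple aggregates of the student-level family data. A minimal sketch, with a hypothetical school identifier and toy values, is:

```python
import pandas as pd

# Illustrative student-level records with family measures and a school identifier.
students = pd.DataFrame({
    "school_id":   [1, 1, 1, 2, 2, 2],
    "family_poor": [1, 0, 0, 1, 1, 0],
    "parent_educ": [12, 16, 14, 10, 12, 13],
})

school = students.groupby("school_id").agg(
    pct_poor=("family_poor", lambda x: 100 * x.mean()),   # percent of students in poverty
    median_parent_educ=("parent_educ", "median"),          # median years of parents' schooling
)
```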
Control variables-The models control for the adolescent's age (measured in years), racial/ethnic identity (non-Latino, white = reference, African American, Latino, Asian, other), parental obesity (neither parent obese [reference category], both parents obese, mother obese, father obese), and dummy variables for whether they are female (=1), disabled (=1), born in the United States (=1), and/or athletic (=1). Most individual- and family-level control variables derive from the adolescent's Wave 1 self-reports. Racial/ethnic identity is based on questions with predetermined categories, but with the option to select more than one. Adolescents' athleticism is based on reports of participating in an organized school sport and/or playing an active sport or exercising five or more times a week during the past week. We include this variable because BMI conflates fat mass with fat-free mass (i.e., muscle and bones). Parental obesity is based on the parent's report of whether the adolescent's biological mother and/or father is "obese."
We also control for school characteristics to guard against confounding with school resources and to account for Add Health's complex survey design. These include the school's size, regional location (west, midwest, south, or northeast [reference category]), urbanicity (suburban, rural, or urban [reference category]), whether it is a public school (yes =1), and the school's racial/ethnic composition (% African American, % Latino, % Asian, % other, % non-Latino white [reference category]). The school's racial/ethnic composition is derived from aggregating across attending students' characteristics, while the others are derived from Add Health's administrative data.
---
Statistical analysis
We use hierarchical logistic regression models in HLM 6.0 to model the effects of both family-level (i.e., "level 1") and school-level (i.e., "level 2") resources on adolescent overweight. Hierarchical models separate between-group (here, defined as schools) and within-group variance to provide accurate estimates of parameter effects and standard errors, adjusted for the non-independence of people in the same group (Bryk & Raudenbush, 1992). We estimate four models. The null model identifies the extent to which adolescent overweight clusters within schools. The second model includes all individual and family characteristics, as well as the school-level variables used in Add Health's sampling design (i.e., size, region, urbanicity, and school type). The third model adds school-level income and school-level parental education, as well as their racial/ethnic composition. Fourth, we add interactions between parents' years of schooling and school-level poverty.
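Although the exact HLM 6.0 specification is not printed in the text, the models described here correspond to a two-level random-intercept logistic regression of the following general form (our notation, not necessarily the program's output labels):

```latex
% Generic two-level random-intercept logistic model for adolescent overweight.
\[
\operatorname{logit}\!\big[\Pr(\text{Overweight}_{ij} = 1)\big]
  = \gamma_{00} + \boldsymbol{\beta}' \mathbf{X}_{ij} + \boldsymbol{\gamma}' \mathbf{W}_{j} + u_{0j},
\qquad u_{0j} \sim N(0, \tau_{00}),
\]
```

Here X_ij contains the family- and individual-level predictors (e.g., family poverty and parents' years of schooling) for adolescent i in school j, W_j contains the school-level predictors (e.g., percent of students who are poor and median parental education), and u_0j is the school random intercept. In the fourth model, the cross-level interaction enters as a product term between own parents' years of schooling in X_ij and school poverty in W_j.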
---
RESULTS
We begin by describing our analytic sample using weighted descriptive statistics presented in Table 1. Similar to national estimates for the mid-1990s (Ogden et al., 2002a;Troiano et al., 1995), 25% of our sample is either overweight or obese. For the remainder of the text we refer to this group as "overweight." On average, the adolescents' parents have completed 13.2 years of schooling and approximately 19.6% of the adolescents live in poverty. Given that school resources are aggregates of these adolescent data, it is not surprising that the average level of student poverty and the school mean of parental education are similar to the individual estimates. The sample characteristics generally fit with national patterns.
In Table 2 we show calculated correlations between parents' years of schooling, the dichotomous variable for family poverty, and the log of family income to ensure that there is sufficient variation in these family-and school-level resources to estimate their independent effects. The correlation between parents' education and family poverty is -0.34, while the correlation between parents' education and the log of family income is -0.73. These estimates suggest that, while there is notable overlap, there is also sufficient variation to distinguish between these two types of family resources with 16,133 cases. Table 2 also shows that the correlation between family-and school-level poverty is 0.35, the correlation between the log of family income and the school's mean log of family income is 0.46, and the correlation between parents' years of schooling and school's median years of parents' schooling is 0.43. Thus, there is sufficient variation between family-and school-level resources to examine their differential effects on adolescent overweight.
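Correlations like those in Table 2 can be reproduced mechanically once the family-level measures and their school-level aggregates sit in one data frame. A hypothetical sketch follows; the column names and toy values are assumptions, not the Add Health variables.

```python
import numpy as np
import pandas as pd

# Family-level measures plus a school-level aggregate merged back onto each
# student record; names and values are illustrative only.
df = pd.DataFrame({
    "parent_ed":         [10, 14, 12, 9, 11, 16],
    "family_poor":       [1, 0, 0, 1, 1, 0],
    "log_family_income": [np.log(12_000), np.log(55_000), np.log(40_000),
                          np.log(9_000), np.log(15_000), np.log(80_000)],
})
df["school_pct_poor"] = [40, 40, 40, 10, 10, 10]   # merged from the school-level file

print(df.corr().round(2))   # pairwise correlations among family- and school-level resources
```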
Results from multivariate, hierarchical logistic regression models predicting adolescent overweight are presented in Table 3. We estimate a null model (Model 1) without any covariates to identify the extent to which adolescent overweight differs across schools. The estimated variance between schools (i.e., the intraclass correlation coefficient) is statistically significant, suggesting that there are school-level differences in the prevalence of overweight. The intraclass correlation provides empirical justification for our exploration of school-level factors in a hierarchical model.
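For a logistic outcome, the intraclass correlation reported for a null model is typically computed with the latent-variable approach, in which the level-1 residual variance is fixed at pi^2/3. The snippet below shows that arithmetic with an illustrative, not estimated, between-school variance.

```python
import math

# Illustrative between-school (level-2) intercept variance from a null model;
# this is a placeholder value, not the estimate from the Add Health analysis.
var_between_schools = 0.15

# Latent-variable approach for logistic models: level-1 variance fixed at pi^2 / 3.
icc = var_between_schools / (var_between_schools + math.pi ** 2 / 3)
print(f"Intraclass correlation: {icc:.3f}")  # share of variation lying between schools
```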
Model 2 adds family poverty and parental education to the model. Similar to many prior studies, we find that living in a poor family is not significantly related to whether an adolescent is overweight. In supplemental models, we omit parental education from the model and find that family poverty is still not statistically significant. (Results available upon request.) Therefore, issues of multicollinearity are not driving the null finding for family poverty.
In contrast, the association between parental education and adolescent overweight is statistically significant regardless of whether we include family poverty in the model or not. With each additional year of parents' schooling, the odds that an adolescent is overweight decline by about 5% (1 - e^(-0.055) = 0.053, or 5.3%). This suggests that, for adolescent overweight, how much money a family has is less important than parents' formal schooling.
Model 3 adds school-level resources and racial/ethnic composition. As expected, the findings for school resources are the opposite of what we find for families. The median level of parental education in a school is not significantly associated with adolescent overweight, but school-level poverty is. The odds that an adolescent is overweight increase by about 1% (e^0.013 = 1.013, or 1.3%) with each percentage-point increase in the share of students who are poor at one's school, or by 19.5% with a one-standard-deviation increase in school poverty (s.d. = 15%). This result is rather robust given that Model 3 includes such a wide range of individual-, family-, and school-level confounders. The significant negative association between (own) parents' education and adolescent overweight is only minimally reduced and remains statistically significant in Model 3. Overall, the results in Model 3 indicate that adolescents are at greater risk of overweight if they attend poor schools and if they have parents with less education.
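The percent changes in the odds quoted above follow directly from exponentiating the logit coefficients. A minimal check of that arithmetic, using only the two coefficients reported in the text:

```python
import math

b_parent_ed = -0.055       # logit coefficient per year of parents' schooling (from the text)
b_school_poverty = 0.013   # logit coefficient per percentage point of school poverty (from the text)
sd_school_poverty = 15     # one standard deviation of school poverty, in percentage points

pct_ed = (math.exp(b_parent_ed) - 1) * 100        # about -5.3% per year of schooling
pct_pov = (math.exp(b_school_poverty) - 1) * 100  # about +1.3% per percentage point
pct_pov_sd = pct_pov * sd_school_poverty          # linear scaling, close to the 19.5% quoted in the text

print(f"Odds change per year of parents' schooling: {pct_ed:.1f}%")
print(f"Odds change per point of school poverty:    {pct_pov:+.1f}%")
print(f"Approximate odds change per s.d. of school poverty: {pct_pov_sd:+.1f}%")
```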
The remaining question is whether these two risk factors moderate each other. The results in Model 4 show that they do. We find a small, but significant and positive interaction between school-level poverty and one's own parents' education. The association between parents' education and adolescent overweight is not uniform across different levels of school poverty. To clarify the patterns, Figure 1 shows the predicted probability of adolescent overweight as parents' education increases for students who attend schools with average, low, and high proportions of poor students when all other variables in Model 4 are held constant at their mean or modal values. Average school poverty equals the school mean (20%), whereas low and high poverty are defined as one standard deviation (15%) below (i.e., 5%) or above the mean (i.e., 35%), respectively.
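A hedged sketch of how predicted probabilities of this kind can be generated from a fitted interaction model is shown below. Only the two main-effect magnitudes echo coefficients discussed in the text; the intercept and the interaction coefficient are invented placeholders, so the printed numbers will not reproduce Figure 1.

```python
import numpy as np

def predicted_prob(parent_ed, school_poverty,
                   b0=-1.0, b_ed=-0.055, b_pov=0.013, b_ed_x_pov=0.0015):
    """Inverse-logit prediction from a model with a parents' education x
    school poverty interaction. Only b_ed and b_pov echo magnitudes from the
    text; b0 and b_ed_x_pov are illustrative placeholders."""
    xb = (b0 + b_ed * parent_ed + b_pov * school_poverty
          + b_ed_x_pov * parent_ed * school_poverty)
    return 1.0 / (1.0 + np.exp(-xb))

for label, pov in [("low (5%)", 5), ("average (20%)", 20), ("high (35%)", 35)]:
    p12, p16 = predicted_prob(12, pov), predicted_prob(16, pov)
    print(f"School poverty {label}: P(overweight) at 12 vs 16 years of "
          f"parents' schooling = {p12:.2f} vs {p16:.2f}")
```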
Figure 1 indicates that the benefits associated with increased parental education are smallest in the poorest schools and greatest in the richest schools. In the poorest schools, the predicted probability of being overweight among students whose parents have 12 versus 16 years of completed schooling is 0.47 and 0.45, respectively. In other words, the risks are almost exactly the same and relatively high, regardless of whether adolescents have parents that are high school or college graduates. Conversely, in the richest schools, the predicted probability of adolescent overweight is lower overall and the same four-year difference between parents who are high school and college graduates nets a larger reduction in the predicted probability of adolescent overweight. In summary, the protective effects of parents' education are greatest in richer schools and fit patterns of resource multiplication (Ross & Mirowsky, 2006).
---
CONCLUSIONS AND DISCUSSION
This study aims to clarify how the risk of adolescent overweight is associated with socioeconomic stratification across the two primary social institutions in adolescents' lives. We contribute to the literature on socioeconomic status and adolescent weight, showing that these patterns are quite complicated, requiring investigators to draw from different research strands to understand why and how family-and school-level poverty and educational resources could matter for adolescent weight both alone and net of each other.
Our findings make three significant contributions to knowledge about how socioeconomic stratification influences adolescent overweight. First, our analysis demonstrates that net of confounders and each other, parents' education, but not family poverty, is associated with adolescent overweight. We speculate that highly educated parents influence their child's weight by using their learned effectiveness, knowledge, and skills to help adolescents better navigate obesogenic environments. Educated parents also likely transmit their knowledge to their children, which may help adolescents make better weight-related choices themselves. We suspect that family poverty does not predict adolescent weight because, with more money, parents could just as easily buy a bigger cable TV package, meals out, or a house in a distant, new suburb versus buying goods that encourage more physical activity or a healthier diet. In essence, it may take knowledge or a particular outlook for parents to even consider the weight-related dimensions of their purchases.
Second, we demonstrate that poverty matters for adolescent overweight, but at the school level. We speculate that school poverty shapes the weight-related structural features of schools. It likely diminishes a school's ability to offer students healthier food choices and physical activity options and may necessitate food industry corporate sponsorships (Nestle, 2002). The stressful nature of poor school environments may also contribute to adolescent overweight given that repeated activation of the stress response increases abdominal fat. An alternative explanation is that poor schools may engender or reinforce weight-related norms that are more accepting of or less averse to adolescent overweight. Schools and the peer groups they foster help define whom adolescents see as appropriate references for social comparisons (Crosnoe, 2000). Some may worry that the significant association between school poverty and adolescent weight simply reflects rich parents self-selecting into better schools, especially given that we aggregate family-level data to measure school poverty. Our analysis actually speaks to this concern. If this were the case, then it would imply that school poverty mediates the association between family poverty and adolescent obesity. Thus, family poverty would need to be a significant predictor of adolescent overweight before measures of school resources are included in statistical models (see Model 2, Table 3). We find no such association even when we utilize other specifications of family income in supplementary analyses. (Results available upon request.) We also estimated supplemental models that include a dummy variable for whether the parent agreed with the following statement (48% did agree): "You live here because the schools here are better than they are in other neighborhoods." This variable is never statistically significant in models predicting adolescent overweight, nor is its interaction with family- or school-level poverty. Further, our study results remain unchanged when statistical models include this additional confounder. (Results available upon request.) This further suggests that endogenous sorting into schools does not drive our results.
Our third contribution is that we demonstrate that school-level poverty moderates the association between adolescents' own parents' education and their body weight. This is one of our most important and intriguing findings. School poverty impinges on the protective role of increased parents' education. In high poverty schools, parental education has an almost negligible association with adolescent overweight. This supports Ross and Mirowsky's (2006) model of resource multiplication. The effectiveness of one resource is hampered when other resources are limited. This finding speaks to the power of larger social environments for setting the opportunities and constraints that youth and their families must navigate. In more obesogenic school environments, parents' educational resources may be overwhelmed.
In summary, the manuscript notably advances our understanding of families and schools as stratified health contexts that shape adolescent weight. Our findings are also instrumental in helping explain the counterintuitive null association between family poverty and adolescent overweight. It is not that poverty does not matter. Instead, it matters at a larger level of social organization than the family - the school context - both directly and as a moderator. With additional data, future research should explore the mechanisms that undergird these observed patterns. In addition, future research should consider the degree to which these patterns vary by sex and race/ethnicity. Prior research finds that family income is statistically significant and negatively correlated with adolescent weight among white girls, but not white boys, while such sex differences are muted among other racial/ethnic groups (Gordon-Larsen et al., 2003; Wang & Zhang, 2006; Zhang & Wang, 2004). Such research would further reveal how these resources in these two stratified health contexts "get under the skin."
Our findings must be considered within the boundaries of three study limitations. First, the study design is cross-sectional because Add Health does not have longitudinal data on family or school resources. That said, we do not expect the causal process to work in the opposite direction, whereby adolescent overweight affects family and school resources (i.e., as a "health selection" process (Haas, 2006; Palloni, 2006)). Furthermore, the cross-sectional measures of poverty and parental education that we use ensure that we assess these school-level resources when students are observably within them. This is particularly important given that many adolescents change schools outside of the formal, age-graded process (Riegle-Crumb et al., 2005). Second, with only 132 schools, we have less power to adjudicate between the relative role of poverty and parental education at the school level. Finally, the measure of income in Add Health is rather limited because it is based on one question. As such, the estimated coefficients for both family- and school-level poverty could be downwardly biased if there is significant and systematic measurement error. This suggests that, if we had a more detailed measure of income, we could find a significant effect of family poverty, but it also implies that we may be underestimating the role of school poverty for adolescent overweight.
Despite limitations, our study makes significant contributions to understanding which socioeconomic resources matter for adolescent overweight. We move beyond thinking about resources solely in terms of their volume and begin to consider variations in their meaning, operation and effectiveness across different health-related contexts. The study's findings push us to more fully consider why money and education matter independently and the environment within which these different resources are embedded. It is simpler to measure and construct hypotheses about the meaning of different family resources, but given that adolescents spend significant portions of their day outside the home -and, of that time, mostly in schools -it is important to consider the resources of environments outside the home and their associated risks or benefits. Such nuanced lines of investigation are imperative for the development of effective interventions for adolescent overweight and improving population health more generally.
• Family-level parental education and school-level poverty predict adolescent overweight.
• Family-level poverty and school-level parental education do not predict adolescent overweight.
• Family education and school poverty interact: the benefits of having better educated parents are reduced in poor schools.
Note: All models include sex, age, race/ethnicity, nativity, disability, athleticism, parental obesity, region, urbanicity, school size, whether the school is public, and school racial/ethnic composition.
---
Appendix Table 1
Select coefficients from hierarchical logistic regression models predicting adolescent overweight with different measures of income (N = 16,133) | 42,656 | 1,349 |
ff6709f661f37546ddb281b3f73f4fda4592e5d5 | Exploring socioeconomic inequality in educational management information system: An ethnographic study of China rural area students | 2022 | [
"JournalArticle"
] | There is a substantial body of systematic literature on socioeconomic inequalities across different disciplines. However, this study relates socioeconomic inequality (SEI) to rural students' educational management information systems (EMIS) in different schools in China. The dynamic force of information technology cannot be constrained in the modern techno-based world. The study was qualitative and ethnographic. Data were collected through an interview guide and analyzed with thematic analysis. Ten male and ten female students were interviewed, based on the data saturation point. The purposive sampling technique was used for the selection of the rural schools and students. This study summarizes the findings and brings together in-depth emic and etic findings based on neo-Marxist conflict theory and the lens of exploitation and domination. The study found that SEI creates disparities in EMIS. Household income inequality has influenced the educational achievements of rural students. Gender-based SEI was not present among students. Family wealth and SES-based exploitation are present regarding EMIS among male and female students. Household wealth is significant for the EMIS. The study puts forward a recommendation to policymakers that exploitation could be overcome among students if the government provides equal opportunities for access to the EMIS. | Introduction
The focus of my study is to explore the socioeconomic inequality (SEI) and educational management information system (EMIS) among rural background students in China. Nowadays, China's fast economic expansion has come at the cost of increasing socioeconomic disparity and a low level of information technology in education.
Similarly, SEI is also related to per capita gross domestic product (GDP), which grew at around 8.6% per year between 1979 and 2014. The improvement in ordinary people's living conditions remains modest, and the wealth gap between the affluent and the poor is expanding (Cheuk et al., 2021). Only one percent of Chinese families held more than one-third of total household wealth and education in 2014, while the poorest one-fourth owned less than 2% (Xie and Jin, 2015). Meanwhile, inhabitants in some areas of China have been exposed to substantially increased risks of severe weather and environmental pollution, as well as unequal access to education, as a consequence of aggressive economic growth (Liu et al., 2010; Cho et al., 2022; Ye et al., 2022), threatening their quality of life, education, and health (Han et al., 2021). Furthermore, pollution-related premature deaths are expected to rise to 1,563 per million persons per year by 2060 in China, and the loss of GDP due to increases in health spending, access to education and information technology, and labor productivity is expected to reach around 2.1% by 2060 according to the Organisation for Economic Co-operation and Development (OECD) (OCDE et al., 2016). Figure 1 presents a conceptual overview of these interlinked disparities.
The idea constitutes a new domain with largely unstudied potential in the systematic literature, because there is an SEI problem in the education sector concerning EMIS. The interconnection of SEI and EMIS as an academic field matures with qualitative methods and techniques applied through an ethnographic lens. SEI has been the object of various studies over the last two decades, and it has a direct relationship with information technology as well as with education. In light of this, in the Chinese cities with the fastest expanding economies, such as Beijing and Shanghai, pollution-induced disparity and the accompanying environmental inequality (EI) between the affluent and the poor can be seen (Xie and Jin, 2015). Nevertheless, SEI and EMIS are still missing from the perspective of students, who form a disadvantaged academic domain; for instance, ethnic minority communities and low-SES rural groups bear a disproportionate burden in education and in their access to the information management system (IMS). A considerable body of research has explored how people with low socioeconomic status (SES), low income, and low education or non-professional occupations are more likely to experience environmental catastrophes such as air pollution, flood, drought, and extreme heat, whereas the IMS has a higher level of influence on education, which could help reduce the impact of such catastrophes (Li et al., 2018; Park et al., 2018; Ur Rahman et al., 2021; Zhuo et al., 2021). Hajat et al. (2015) concluded that the above literature has concentrated on the effects of EI on these three primary SES indicators without taking into consideration possible influencing variables such as family income, education, and the MIS. Vassilakopoulou and Hustad (2021) examined the most recent decade of information system (IS) research on the digital divide in settings with strong technical infrastructures and economic conditions; they found that models of digital disparities were present and that the SEI factor affects the digital divide across different societies. This ethnographic qualitative research paper identified the gap in the previous literature and then drew the cyclic process of socioeconomic inequalities regarding EMIS for rural background Chinese students, which is depicted in Figure 2.
As a consequence, these studies may have failed to explain SEI and the consequent differences in SES effectively. Several social epidemiological factors are missed in the domain of SEI, such as wealth and income, education, and the IMS, which are stronger indicators of wellbeing inequality in society, especially for the new generation of students. Over the last two decades, several academic authors have claimed that disparities in environmental consequences and family wealth, rather than household income, are more robust indicators of SES in China (Pastor-Satorras et al., 2015; Chu et al., 2020). A family's wealth may represent its capacity to acquire an apartment in a chosen location, which could influence household members' exposure to education and its requirements in the technological world. To acquire better knowledge of SES and SEI in China, it is necessary to look at whether different forms of SEI have an impact on EMIS and its exposure in general among students (Zheng and Yin, 2022). This is the problem posed here in terms of SEI and EMIS among middle-level rural students (Figure 3). SEI research conducted among rural students captures the critical spatial factors of EMIS, which are still ignored in China, a country with high gross domestic product (GDP) growth. For instance, Wang et al. (2021) pointed out that information and communication technology (ICT) has had a significant influence on the economy and society in recent decades. More precisely, although ICT is critical for promoting socioeconomic development (SED), it can have a detrimental impact on SED in neighboring regions (education, schools, and academic achievements), meaning that China's provinces face a digital divide in which socioeconomic growth is high but inequality remains constant. That research concluded that practical policy proposals for the future growth of ICT are important to end inequality among rural communities based on EMIS, reduce the negative impacts of the digital divide, and maximize the advantages of ICT-based SED. Furthermore, Zehavi et al. (2005) found that many social inequities have been exposed by academic studies; however, the interaction and entanglement of digital technology, structural stratification, and the established propensity for "othering" in cultures of education, especially through the lens of an intersectional feminist approach, is rarely narrated, which is a dire need in the Chinese rural background. As part of a future research agenda, IS research should move beyond simplistic notions of digital divisions to examine digital technology as implicated in complex and intersectional power systems, and improve sensitivity to the positionality of individuals and groups within social orders (Zheng et al., 2022). There are other implications for practice and policy, such as going beyond single-axis analyses of digital exclusion, and students' education related to information systems would be an excellent lens to study in the future. In light of this, Stewart (2021) argued that academic institutions, academics, administrators, educators, and students have thoroughly engaged with the emergency remote teaching (ERT) strategy, as academic communities throughout the world switched to ERT. That literature overview combines the four significant themes that emerged from a thematic analysis of the findings.
These themes are ERT experiences; the digital divide and massive educational/socioeconomic disparities; routinely encountered ERT difficulties, issues, and challenges; and frequently made ERT changes. That study recommends to future researchers that technology is the best tool to teach students without socioeconomic inequalities (Ma et al., 2022). This problem is a long-standing challenge for Chinese rural students and communities, especially in the education sector, and it could be countered with the particular remedies outlined here (see Figure 3).
---
Theorizing social class
The concept of social class was coined by Karl Marx, and Krieger (2001) explained the socioeconomic domain in the form of social class. Social class refers to a person's economic relations, which lead to the formation of social groups. The production, distribution, and consumption of goods, services, and information, as well as the relationships between them, affect these interactions. As a result, social class is founded on a person's position in the economy, whether as an employer, employee, self-employed person, or unemployed person (in the formal and informal sectors alike). Furthermore, the exploitation and dominance of people are part of social class relations.
---
Conflict theory of extension in the form of exploitation and domination
Wright describes the relation between exploitation and domination as part of class theory. This conceptualization is most closely aligned with Marxism (or neo-Marxism) (Muntaner et al., 2002; Muntaner and Lynch, 2020), and it describes the processes by which some social classes control the lives and activities of others (domination), as well as the processes by which capitalists (owners of the means of production) gain economic benefits from the labor of others (exploitation) (Wright, 2015). In this perspective, the main distinction between social classes is between those who own and control the means of production and those who are paid to utilize them. Additional subcategories may be added based on the education of the parents; children of such households are fully exploited in this way, and the educational capabilities of non-dominant class students do not grow because they have no access to a high level of education and technology (Breen and Goldthorpe, 2001). The theory of exploitation and domination is applied here to SEI among Chinese rural students in relation to EMIS. Some classes have all educational opportunities in information technology, a modern school system, a high level of facilities, and wealthy living. On the other hand, some students have no access to these facilities due to low SES. Similarly, the lens of social class theory deductively explains the importance of EMIS for rural background students in China (Ma and Zhu, 2022). In light of this, Wright's relative power of social classes theory is more powerful in overcoming socioeconomic inequalities in societies (Wright, 2015; see Figure 5).
Conceptual perspectives of advanced literature and in-depth themes representation.
---
FIGURE 4
The exploitation and dominance of the people in social classes (reproduced with permission from Krieger, 2001).
For an overview of existing techniques, the study is directed at addressing SEI and EMIS with the help of social class theory and its subtheory of class exploitation and domination. It should be noted that the importance of SEI is not ignored in the domain of EMIS, because family wealth, economic status, and their relationship with students' education are challenging topics in Chinese academic research. This is a common problem encountered when using such a qualitative method to explore an in-depth understanding of SEI and its interconnected challenges among Chinese rural students at the middle school level (Chen et al., 2021). China is working to overcome such social epidemiological challenges for urban students, but rural students are ignored, and this in-depth, subjective study explored the challenges and solutions from the emic and etic perspectives of the students. Second, we examine the existing Chinese household wealth database and highlight current research limitations, such as the absence of high spatial resolution and economically representative family wealth data, that may aid a new domain of EMIS research in China. Third, using a qualitative methodology, we explore the various approaches (e.g., emic and etic perspectives) for constructing appropriate SEI proxies throughout the research, which is also highlighted by global educationists for solving this type of phenomenological challenge among rural background students. Fourth, concerning SEI and EMIS research in China, we address the advantages and disadvantages of current SEI proxies for assessing SES, household wealth, and access to education for all, and their relation to the social class theory of Karl Marx. Fifth, in relation to SEI proxy development in China, we summarize the challenges to data availability and quality, including ethical and privacy concerns, and recommend that policymakers improve the quality and availability of EMIS while removing SEI in the provision of information systems at the school level (Xiong et al., 2022). Finally, we wrap up our research and provide recommendations for future research into new SEI proxies to aid EMIS investigations in China and overcome student socioeconomic disparities.
---
Research design
The qualitative research process was ethnographic. First, we reviewed the Chinese and worldwide literature on SEI, SES, EMIS, IS, and IT, and then systematically specified the studies emphasizing social class, domination, and exploitation related to these specific themes. Consequently, the theoretical framework was developed to position the debate on inequalities and their relationship with rural students in China. The population and sample of students were taken from one province of China. In this particular study, we selected rural area schools in Hainan Province and their students (see Figure 6).
---
FIGURE 5
Relative power of social classes theory and socioeconomic inequality (reproduced with permission from Wright, 2015).
In this research, the interpretive viewpoint was applied. A subjectivist presumption, which generates reality within a social context, is the foundation of the interpretive viewpoint (Bell and Bryman, 2005). The research used a constructivist methodology, which provides the epistemological underpinning of the method (Davis and Sumara, 2002). Similarly, Guba (1989) distinguishes between conventional and constructivist belief systems, in which socially created realities are based on a society's dominant belief system and are viewed and understood differently by different people. Socially produced reality is not regulated by natural rules. When an individual's view is based on a single fact, it is not acceptable; instead, a consensus of persons is acceptable under a constructivist approach that emphasizes truth. Constructivist views rely on a monistic subjectivist epistemology in which humans pose questions about the social environment and then discover the final answer in their own time (Guba and Lincoln, 1989). As a result, dialectical repetition employs a hermeneutic technique that is considered constructivist. Similarly, analysis and criticism, reiteration, reanalysis, and recritique are pragmatic criteria for reaching logical knowledge and building strong thinking skills.
The research was based on the author's subjective interpretation of earlier ideas on the link between Chinese middle school rural students' socioeconomic disparities, education, and EMIS. The laddering methodology, which was further explored in the data gathering procedure, was used to eliminate bias from the data. According to Bell and Bryman (2005), the interpretative viewpoint is commonly employed in qualitative research. According to Eriksson and Kovalainen (2015), it is conceptually reliant on explanation. The quality of interpretative research focuses on human sensibility and complexity rather than preset categories and variables (Eriksson and Kovalainen, 2015).
In this study, a qualitative ethnographic technique was applied. An ethnographic study is a way for researchers to gain a more profound knowledge of a field by immersing themselves in it to build in-depth information and analyze the people's culture and social environment. Its goal is to "make the unfamiliar familiar" through "making sense of public and private, overt and obscure cultural meanings" (Grech, 2017). A sample of 10 male (middle school) and 10 female (high school) students was chosen. The students from the rural region ranged from 11 to 14 years old, and both male and female students took part in the research. The respondents had similar features such as age, class, rural school system, and government educational system, and they were chosen purposively from among the school students (population). Participants were from a rural background. It should be highlighted that all male and female students were from low socioeconomic status (SES) backgrounds, except one student who was from a high SES background; the participants described this information during the interviews.
FIGURE 6
The qualitative research data for the illiteracy rate in China (reproduced with permission from Hannum et al., 2021).
In-depth and unstructured interviews were used to gather information from participants. Voices, knowledge, and perspectives are prioritized in this strategy (Smith, 1999). A laddering strategy was also applied during the interviews. It is one of the psychological interview strategies that is quite useful for field research, and it has the advantage of allowing researchers to explore the participants' behavior. "This strategy entails asking the interviewee follow-up questions based on their prior responses to acquire a better understanding of the respondents' perspectives" (Veludo-de-Oliveira et al., 2006; see Figure 7).
Each interview lasted 30 min, and all of the participants were included. The interviews were conducted in Chinese to fully understand the respondents' opinions and were then translated into English for reporting in this research. The school heads and the students' parents gave their informed agreement, and then the students were asked to agree to an interview.
In this particular study, the rural schools in the Hainan province of China students were interviewed, and the schools' names are below: "Tengqiao Middle School in Sanya Haitang District, National Middle School in Jiyang District, Meishan Middle School, Meishan Primary School, Baogang Middle School, Yacheng Middle School, and Nanbin Primary School in Yancheng District."
During this period, the participants were advised that their real identities would not be revealed in the reports and that pseudonyms would be used instead. The thematic analysis approach has helped assess outcomes in earlier investigations; this strategy can provide the reader with detailed, contextual, and culturally sensitive facts. The thematic analysis technique was employed for data analysis: discovering, interpreting, and reporting patterns or themes within the acquired data. In this case, narratives were used to describe the outcomes, which helped to clarify them (Braun and Clarke, 2006). One of the benefits of thematic analysis is that it allows researchers to identify patterns in the respondents' statements via a flexible, inductive, and ongoing process of connecting with narratives. As demonstrated in the thematic data analysis funnel diagram (Figure 8), all content is categorized into fluid categories of deductive and inductive themes, subthemes, and coding.
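As a purely illustrative sketch of the funnel logic (coded interview segments grouped under themes), the snippet below shows one simple way such a codebook can be represented; the codes, theme labels, and participant IDs are invented for demonstration and are not the study's actual codebook.

```python
from collections import defaultdict

# Hypothetical code-to-theme mapping; invented for illustration only.
code_to_theme = {
    "no computer at home":        "Socioeconomic inequalities and EMIS",
    "parents cannot buy gadgets": "Socioeconomic inequalities and EMIS",
    "wants extra tuition":        "Socioeconomic status and education",
    "classmate has home EMIS":    "Exploitation and domination",
}

# Hypothetical coded interview segments: (participant pseudonym, assigned code).
coded_segments = [
    ("EMISS-A", "wants extra tuition"),
    ("EMISS-B", "no computer at home"),
    ("EMISS-C", "parents cannot buy gadgets"),
    ("EMISS-D", "classmate has home EMIS"),
]

themes = defaultdict(list)
for participant, code in coded_segments:
    themes[code_to_theme[code]].append((participant, code))

for theme, segments in themes.items():
    print(theme, "->", segments)
```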
---
Data analysis and findings
The results, which portray the genuine voices of participants, are shown below; all names and identities are withheld for anonymity and confidentiality. The theoretical link between rural students' SEI viewpoint on EMIS and its relationship with social class, dominance, and exploitation is described in the data analysis section (Figure 9).
The participants replied that income, education, and employment are typically used to establish traditional metrics of SES. Similarly, some participants claimed that household SES had influenced their educational achievements during school. Such as, we could not use information technology because we have no exposure and other schools in the urban areas have information technology. Furthermore, some respondents described that family wealth is essential and then we will get a good education. Most schools in the urban area have access to education and information technology. The real verbatim is written below: The laddering approach to explore the participants' behavior (reproduced with permission from Veludo- de-Oliveira et al., 2006). The thematic data analysis funnel system.
"Chinese version ( , Wǒ xīwàng wǒ fùqīn shìgè nǔ mào jiàng, wo zài gāojí xuéxiào xìtong xuéxí) (EMISS-2)." "I wish my father would be a milliner, and I study in the high-level school system" (EMISS-2).
Furthermore, the participants stated that our middle school is good, and we have information technology without discrimination against SES. Family affluence may impact the individual but not the school, and the Chinese government gives almost equal education opportunities.
"I am happy with my parents' socioeconomic status, and there is no difference in the school regarding getting an education" (EMISS-10). Some participants suggested that the impact of household wealth on the student's exposure regarding education is more. The participants might have been readily explained by household income, representing a family's economic wellbeing, and positively connected with education exposure. The distribution of economic values represented substantial divergence for the students in the school. Family wealth is more unequally distributed in urban and rural China. In this regard, one student discussed that household income might be better for getting an education, and SES could create disparities among rural and urban areas students' education. The real words of the respondent have narrated below. . . "I noted that my classmate has high parents' SES, and his teacher taught him after his school time. I wish that I should learn many languages, such as the English language and the American English Language accent, but I could not" (EMISS-20).
According to some participants, there is no strong connection between parents' high SES and access to education. Teachers come from similar family income levels, and they have not created disparities among students in teaching lessons. On the other hand, other participants replied that household wealth or income becomes a more important SES indicator for educational exposure and for getting an education. Given its greater stability and more prominent effect on living standards over time, family wealth may reflect a higher degree of education.
"I know there is no educational discrimination because of parents' SES in middle school" (MISS-15).
Because of the issue of socioeconomic disparity in educational achievements, this concept represents a novel and mostly unexplored area of research in the academic literature. In light of this, Jiang found that SES has influenced Chinese students' educational achievements (Jiang, 2021). Pesando (2021) claimed that family wealth and its social consequences are more significant predictors of getting an education. The current study found that SES has a relationship with educational achievements, and SES and family prosperity may have an impact on educational achievements. Furthermore, household resources and SES influenced students' educational achievements at the school level (Figure 10).
The participants revealed that income shifts reflect living standards and influence the educational achievements of middle school students. Some students agreed that socioeconomic inequalities exist among students, and some students have a better understanding of EMIS compared with low-income students. High socioeconomic status students can buy computers, laptops, and mobile phones to get online education. Household wealth inequality reflects structural and chronic poverty, which further stops students from learning in the online education system. Income inequality, by contrast, takes the form of transient poverty, whereas wealth is less volatile and more reliable. Wealth, rather than income, is a better predictor for EMIS and information technology. "I wish that I have a fast computer and internet connection for the learning of education" (EMISS-12).
However, some participants were from industrialized family backgrounds, and they described the socioeconomic inequalities that exist in some school systems. The city school system differed in its online information-sharing system, and the city schools gave all facilities to their students while they were studying online. Now that they have shifted to rural area schools, there is no such reasonable EMIS for students' learning.
Theme: The socioeconomic status and education.
---
FIGURE 10
Theme: Socioeconomic inequalities and educational information management system.
"I believe that the city school system was sound good, and its EMIS performance was better than the rural school system. My parents shifted village home from the urban industrial areas, and now I feel a difference regarding EMIS and information system in this rural area school" (EMIS-18).
The participants agreed that some of their classmates dropped out of school because of their parents' low household income, which is a dangerous sign for the overall personal career of the students. Low income lowers a family's living standards, and their students do not remain consistent in getting an education. The participants added that if household income is more robust, it is a good sign for the students' future or personal careers. The true words of one participant were quoted: "I agree that socio-economic inequality can stop some from getting an education in their career" (EMIS-9). The relationship between EMIS and family income is interrelated with the educational career. Participants further cited that family income is a good pointer to one's mental capacity since, without the pressure of money, a person can study and teach very well. Within the last month, a few students' guardians in Hainan province were challenged over their children's access to information technology and the internet of things. These guardians were from low SEI backgrounds, and they could not provide EMIS gadgets to their students. Moreover, a few participants discussed how long-term wealth accumulation is better than short-term income, since long-term accumulation sustains the children's education and allows families to buy EMIS gadgets for better learning outcomes in school. The low SES of students certainly revealed the absence of various information technology-related gadgets.
"I have no information technology-related tools for accessing the EMIS system. The school is closed, and our educational activities are not sustained due to no computer system and mobile phones for the online management information system" (EMISS-7). Wang et al. (2021) revealed that ICT significantly influenced the economy and society in recent decades. Although ICT is critical for promoting SED, the inequality of the digital divide is present. Furthermore, Zehavi et al. (2005) found that many social inequities and digital technology interactions are different in the structure of society, which influence educational culture among students. Stewart (2021) argued that academic institutions, academics, administrators, educators, and students have thoroughly appreciated the ERT strategy. Such implementation is not fruitful due to socioeconomic disparities in the educational institution. The results of the current study were in link with the previous literature. Similarly, SEI exists in the students, and high-income students have a good understanding of EMIS compared to low-income students. Likewise, high socioeconomic students have the capacity to buy computers, laptops, and mobile to get online education (Figure 11).
Participants replied that exploitation and domination related to EMIS are present to some extent. Similarly, some participants claimed that low socioeconomic status students are exploited in online educational learning: they could not use information technology because they have no exposure, while people of high economic status have information technology access in their homes. Furthermore, some respondents described that EMIS is essential for getting a good education, and most students have access to information technology. The authentic verbatim is written below:
"Wǒ xīwàng jiāli yǒu xìnxī jìshù xiǎo gōngjù, wǒ bù huì zài kètáng xuéxí zhōng bèi bōxuè" (EMISS-16).
"I wish would have information technology gadgets at my home, and I would not be exploited in my class learning" (EMISS-16).
The participants answered that they have no access to EMIS and that their education is exploited due to the lack of access to information technology, which is one sort of discrimination based on low SES. Family exploitation may impact students' education at the school level, and the Chinese government should give information technology tools to every student to safeguard their academic life.
"Wǒ duì zìj ǐ de jiàoyù chéngjī bù mǎnyì, yīnwèi qùnián wǒ yīn wéi wúfã sh ǐyòng xìnxī jìshù huò EMIS ér méiyǒu shàngkè" (EMISS-6). "I do not feel happy with my educational grades because last year, I did not attend classes due to no access to information technology or EMIS" (EMISS-6).
The study participants suggested that exploitation in the school influences students' exposure to EMIS. Students from dominant household income groups assert themselves in the school, and their EMIS exposure is higher than that of low household income students. For the students in the school, the distribution of economic values showed a significant disparity. The data analysis shows how complex the exchange between the person, his or her social context, and the educational system truly is. It also uncovers the persistence of the meritocratic ideal of individual agency in teacher, parent, and even student discourses. At the same time, however, the problematization of minority and working-class habitus and the culturalization of "educational failure" appears to pull the plug on this argument, because it presupposes that an individual student is (strongly) determined by his or her home environment. From that viewpoint, pupils', parents', and even teachers' agency appears to play a minor part. In this discussion, we begin by elaborating some limitations and advantages of our paper, whereas in a second section the broader social implications of the findings are discussed. The urban and rural China education systems are unequal, and family SES is unequally distributed. This creates dominance among students, and low SES students have no access to the EMIS at home during online classes. The true words of the participant are narrated below: "I noted that my classmate has a dominant level in the class, and she has access to the current EMIS in the home. I wish that I should learn about EMIS in the school" (EMISS-11).
Theme: Conflict theory of extension in the form of exploitation and domination.
Moreover, the participants revealed that the dominant attitude of the students is due to high SES in the class, and they already have access to the EMIS. EMIS and parents' high SES have a strong connection to their educational achievements. Teachers are from the same family income, and sometimes they exploit students of low SES in the classroom. On the other hand, participants said that household wealth or income is more important for EMIS.
The compensatory potential of the school and its staff is believed to be exceptionally modest or even missing. However, the research shows that a more comprehensive approach centering on the consistency between the home and school environment can make a difference, and the discussions made clear that teachers feel like only a pawn in an educational system that is strongly influenced by sociodemographic changes within the broader society. On the one hand, teachers become demotivated or experience feelings of futility, whereas on the other hand, teachers' ideas about the low teachability of ethnic minority students are reflected in pupils' feelings of futility, demotivation, and even mental withdrawal from education.
"I feel that high SES is more in the female students, and teachers are not doing discrimination based on a gender level" (EMISS-15). Vassilakopoulou and Hustad (2021) described that technology-enabled information lacks in terms of SEI. There is digital divide inequality among different socioeconomic representations. The current study found that access to IS s is different among rural schools. Similarly, the study found that removing SEI based on an information technology system should also be accessible to students in rural areas. Additionally, exploitation in the school influences EMIS. Dominant household income students represent their selves in the school. The distribution of economic values showed a significant disparity related to EMIS. These factors mentioned above create dominancy among students, and low SES students were exploited for EMIS home-based and online classes. The theory of neo-Marxism suggests that domination controls the lives and activities of others frontiersin.org Ye 10.3389/fpsyg.2022.957831 (Muntaner et al., 2002;Muntaner and Lynch, 2020). In this regard, Wright (2015) described that gaining economic benefits from others is a form of exploitation in society. Our results conclude that SES brings exploitation and domination among students. For instance, the dominant attitude of the students is due to high SES, and these students have access to the EMIS.
---
Conclusion
The study aimed to explore SEI regarding EMIS among rural area students at the middle school level. Our research shows that SEI is present regarding EMIS, and that inequalities in household wealth and income bring unequal educational learning among different schools in China.
Moreover, family wealth and SES also affected students' educational learning at school and at home. Family wealth and SES-based exploitation are present in the EMIS among male and female students. Household wealth is significant for the EMIS, and it is recommended that future researchers conduct a quantitative study to measure the exact facts and figures concerning EMIS. The statistical outcomes of SEI research may point to a spatial solution that could considerably reduce this problem. However, only a few primary studies have been undertaken on SEI and EMIS in China, and this research is limited to schools in Hainan districts and is not generalizable to all Chinese schools. Legal, policy, and information technology-based measures should be arranged for male and female students, along with data quality and availability for low SES students, to overcome exploitation among rural school students.
A significant part of this process will be bringing the differences displayed in society into the classroom by utilizing the social diversification of the staff and the substance of the educational program, and by engaging and raising the accountability of all people, communities, and educational organizations involved. Research can offer vital insights for all actors involved.
---
Practical recommendations
1. Information technology-based data quality and availability for low SES students are mandatory at the middle school level.
2. It is recommended that exploitation could be overcome among rural students if the government provides equal opportunities for access to the EMIS.
3. This study does not generalize to the whole population of China because it is limited to a few schools, not all schools in China.
4. A correlational analysis could be conducted between SEI and EMIS for rural Chinese schools.
---
Data availability statement
The original contributions presented in this study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
---
Ethics statement
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
---
Author contributions
The author confirms being the sole contributor of this work and has approved it for publication.
---
Conflict of interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
---
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. | 36,997 | 1,396 |
0ec2e750c336fd98d49454af94930897aa6871bd | Gender, religion, and sociopolitical issues in cross-cultural online education | 2015 | [
"JournalArticle"
] | | Introduction
Given the globalization of health professions education (Schwarz 2001;Harden 2006;Norcini and Banda 2011), health professions educators need to pay attention to cultural differences and values, and the events that shape them. If people feel it is inappropriate to bring their identity or ideological background into educational environments, students may remain "physically and socially within…a culture that is foreign to, and mostly unknown, to the teacher" (Hofstede 1984), and teachers' cultural assumptions will prevail. The term 'cultural hegemony' describes this power of a dominant class to present one authoritative definition of reality or view of culture in such a way that other classes accept it as a common understanding (Borg et al. 2002;Gramsci 1995). Thus, an implicit consensus emerges that this is the only sensible way of seeing the world. Groups who present alternative views risk being marginalized, and learning may suffer (Arce 1998;Monrouxe 2010;Hawthorne et al. 2004). Therefore, leaders of cross-cultural health professions education need to avoid inadvertently encouraging learners to leave their cultural background at the classroom doorstep (Beagan 2000). The term cross-culturalism refers to exchanges beyond the boundaries of individual nations or cultural groups (Betancourt 2003) as opposed to multiculturalism, which deals with cultural diversity within a particular nation or social group (Burgess and Burgess 2005). This research applies the concept of cross-culturalism to faculty learning and developing a leadership community of practice (Burdick 2014).
This research is conceptually orientated towards the critical theory research paradigm (Bergman et al. 2012) and the concept of 'critical consciousness. ' Kumagai and Lypson argued that cultural education in medicine must go beyond traditional notions of 'competence' (Kumagai and Lypson 2009) to reflective awareness of differences in power and privilege in society, and a commitment to social justice (Freire 1993). To avoid tacitly imposing cultural assumptions, faculty need to facilitate diverse viewpoints. The ability to do so is most important in online education due to its lack of nonverbal communication and emphasis on written learning (De Jong et al. 2013). Discourse theories also fall within the scope of critical theory. Stemming from the parent disciplines of linguistics, sociology and psychology, this family of theories holds that language and other symbols and behaviors express identity, culture, and power (Hajer 1997). Those symbols and signs reflect the order of society at a micro-level, which in turn reflects social structure and action at a macro-level (Fairclough 1995;Alexander 1987). Discourse theories provide heuristics, which can be used to explore relationships between power, privilege, and identity.
Our research question was: How do participants' sociopolitical backgrounds enter online discussions focused on health professions education and leadership to generate critical consciousness? We selected the Foundation for the Advancement of International Medical Education & Research ® (FAIMER ® ) as the setting because its purpose is to develop "international health professions educators who have the potential to play a key role in improving health professions education at their home institutions and in their regions, and ultimately help to improve world health" (FAIMER September 24, 2013). This group of individuals, participating in communal activity, and continuously creating a shared identity by engaging in and contributing to the practices of their communities (Norcini et al. 2005) forms a community of cross-cultural practice (Burdick et al. 2010).
---
Methods
---
Educational setting and participants
The FAIMER Institute (Burdick et al. 2010;FAIMER September 24, 2013;Norcini et al. 2005) provides a 2-year fellowship, which each year develops a cohort of 16 mid-career health professions faculty from Latin America, Africa, the Middle East, and Asia to act as educational scholars and agents of change within a global community of health professionals. There are 3- and 2-week residential sessions 1 year apart in Philadelphia and two 11-month online discussions conducted via a list serve. Both formal and informal meetings during the residential sessions foster cross-cultural understanding by encouraging fellows to share information about their ethnicity, religion, political influences, food, dress, and language. Respect for differences is supported by structured 'Learning Circle' activities (Noble et al. 2005;Noble and Henderson 2008) and sessions covering a range of topics related to education and leadership.
Internet connectivity is problematic in remote areas, so a list serve is used for online discussions. These discussions had two major elements in 2011-2012, when this study was done. First, Fellows reported progress on educational innovation projects they had implemented at their home institutions with the guidance of faculty project advisers. Second, teams of 5-6 current Fellows selected topics, and then collaboratively designed and implemented six 3-week e-learning modules to deepen their health professions education and leadership expertise. Faculty e-learning advisers, mainly from the U.S., and an alumni faculty adviser facilitated the online discussions, whose participants included 32 first and second year Fellows and any of the 150 program alumni who wished to take part. The list serve also provided an informal resource and social support network for Fellows (e.g., congratulations for professional or personal milestones; condolences on personal or national tragedies; holiday greetings). To help those who were not native English speakers, had limited time, or were using mobile devices with limited editing functions, Fellows were encouraged to post short comments and not be overly concerned with English grammar. Fellows were required to post "at least one substantive comment that advances the topic" during the e-learning modules, but were not given any specific guidelines to deliberately post cross-cultural comments.
---
Methodology
It has been argued that qualitative research is of good quality when epistemology, methodology, and method are internally consistent (Carter and Little 2007). Located within the critical theory paradigm (Lincoln et al. 2011) this research had a subjectivist epistemology. Discourse theory holds that our words are never neutral; each has a historical, political and social context (Fiske 1994). Researchers use their 'critical reflexivity' to explore the relative value of different subject positions. Critical discourse analysis methodology allows them to explore dialectical tensions within participants' written language. We now describe the methods we used to do that.
---
Critical reflexivity
ZZ, a FAIMER Institute Fellow from Pakistan, was educated as a physician in Pakistan, trained as an Internist in the United States, returned to academic medicine in Pakistan, and 10 years later immigrated to the United States. PM is a U.S. faculty member of the FAIMER Institute with extensive experience of academic leadership development involving gender and minority participants (Morahan et al. 2010). DV, RN, and TD (from the Netherlands, Canada, and U.K.) are extensively involved with cross-cultural education and one (TD) has published on critical discourse (Dornan 2014). All authors had extensive experience of online education. ZZ's cross-cultural experience and understanding of participants' situations inevitably influenced her interpretation of posts to the list serve. In order for this background to serve as a resource to the project, her co-researchers, including PM, who is one of the residential FAIMER faculty advisors, joined in an explicit, conscious process of critical reflexivity, reading data, joining periodic Skype calls, commenting on documents, emailing reflexive comments to one another, and helping each other identify their preconceptions. PM contributed the perspective of a faculty advisor involved with the list serve.
---
Identification of text for analysis
ZZ compiled all posts to the list serve between August 1, 2011 and August 1, 2012 related to the topics of the e-learning modules, social posts, information requests, and spontaneously generated discussions (but not congratulatory posts, as they consisted of single words or short phrases like "Congratulations"; "Well done") into a 1286-page document. She used her reflexive understanding of the posts to identify those which referred to sociopolitical issues, including religion and gender. Guided by this initial review, the authors compiled a list of keywords and used them to text-search the document to identify any text missed in the first pass. The words were: Terror(ism), Liberal(ism), Conservat(ism), Religion, Islam, Hinduism, Buddhism, Christian, Eid, Christmas, New Year, Chinese New Year, Diwali, Basant, Easter, Carnivale, Lent, Passover, Female, Women, Democra(cy), Dictator(ship), Multicultural(ism), and Diversity. ZZ ensured that entire posts, including associated back-and-forth dialogue between participants, were included, checking with another author (PM) who had actively participated in the discussions. The posts containing these concepts were compiled into an 11-page transcript.
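For readers who want to reproduce a comparable first-pass screen on their own corpus, the keyword step described above can be approximated with a few lines of code. The sketch below is only an illustration, not the authors' actual procedure: the file name, the blank-line post delimiter, and the reduced keyword list are assumptions made for the example, and it keeps whole posts so that back-and-forth context is preserved.

```python
import re

# Illustrative keyword stems (a subset of the study's list); matching on stems
# catches variants such as "terrorism"/"terrorist" or "democracy"/"democratic".
KEYWORD_STEMS = [
    "terror", "liberal", "conservat", "religion", "islam", "hindu",
    "buddh", "christian", "eid", "christmas", "diwali", "easter",
    "passover", "female", "women", "democra", "dictator",
    "multicultural", "diversity",
]
PATTERN = re.compile("|".join(KEYWORD_STEMS), flags=re.IGNORECASE)

def flag_posts(posts):
    """Return the posts (kept whole, so surrounding dialogue survives)
    that contain at least one keyword stem."""
    return [post for post in posts if PATTERN.search(post)]

# Hypothetical usage: posts are assumed to be separated by blank lines
# in a plain-text export of the list serve archive.
with open("listserve_2011_2012.txt", encoding="utf-8") as fh:
    posts = [p.strip() for p in fh.read().split("\n\n") if p.strip()]

sociopolitical_posts = flag_posts(posts)
print(f"{len(sociopolitical_posts)} of {len(posts)} posts matched a keyword")
```

Posts flagged this way would still need the human reading described above, since keyword matches only narrow the corpus rather than judge whether a post is genuinely sociopolitical.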
---
Methodological framework
The content analysis drew insights and analytical tools from critical discourse methodology, which is consistent with the critical paradigm in which this research was conducted. Discourse theory holds that our words are never neutral: each has a historical, political and social context (Fiske 1994). Qualitative analysis can identify connections between texts and social and cultural structures and processes (Fairclough 1995). Gee specified features of the structure and content of text, which identify how social structures and processes influence social action (Gee 2014) and said they could be combined with a general thematic analysis not rooted in any particular linguistic methodology (Gee 2004).
---
Analytical procedures
The researchers used analytical tools developed by Gee (2014) to explore how language built identities, relationships, and the significance of events. They all read the 11-page transcript, searching systematically for the 'situated,' or contextual, meaning of words, identifying typical stories that invited readers or listeners to enter into the world of a writer, looking beyond what contributors were saying to identify what their discourse was 'doing,' and exploring how metaphors were used. They worked independently of one another, highlighting material of interest and annotating them with marginal comments. They exchanged and discussed comments to identify and explore areas of agreement and disagreement. ZZ kept notes about the discussions, archived the comments into a single dataset, and maintained an audit trail back to the original data. She then wrote the narrative of results, proceeding from description to interpretation to explanation while constantly comparing these explanations to the original textual materials. The other authors contributed their reflexive reactions to the evolving narrative of results.
---
Results
Although FAIMER's mission includes fostering cross-cultural education, less than 1% of the text (11 pages) was explicitly sociopolitical. Participants from 16 countries in Africa and the Middle East (Ethiopia, Nigeria, Kenya, Cameroon, Egypt and Saudi Arabia), Latin America (Mexico, Colombia, Chile), Asia (India, Sri Lanka, Pakistan, Bangladesh, China, and Indonesia), and the United States, contributed to the sociopolitical discussions. They contributed posts, typically in response to events in their home countries, which did not necessarily relate to the topics of the formal discussions. In other words, the geo-political contributions appeared spontaneously, without a specific request by faculty facilitators. These conversations soon petered out for several reasons. There was limited back-and-forth dialogue between an initiating participant and other participants, which limited the depth of the discussions. Posts were greeted not with positive or negative responses, but with silence, and faculty did not ask for more information or build on what had been said. Within the limited discussions that did take place, we identified four strands (parts of conversation within an email thread). Participants discussed experiences related to political events in their countries (political strand); highlighted gender issues (gender-related strand); discussed religion in their home countries (religion-related strand); and offered glimpses into the impact of cultural factors on their lives (general cultural strand). The following paragraphs elaborate those four topics, and Table 1 provides examples of specific posts.
---
Political strand
Political text concerned two main topics: terrorist attacks in India and Pakistan, and the Arab Spring in Egypt. There were two additional posts (from Egypt and Saudi Arabia) about local governments fostering progress and a view from the U.S. on the value of democracy.
Table 1 Examples of posts from the strands (excerpts are part of a back-and-forth dialogue)
Political strand: Arab Spring. An Egyptian woman chronicling her lived experiences through the Egyptian revolution using the metaphor of childbirth: "When I gave birth to my kids, I went through a normal delivery, and refused to take pain killers…I wanted to experience labor pain, which is unbearable; yet I enjoyed every single moment of it…with all those intermingled feelings of suffering, curiosity, serenity, fear, happiness, just waiting for the moment of listening to the first cry". She then metaphorically linked childbirth to the electoral process: "Today, while I was impatiently waiting for announcing Egypt's first civil president, the same feeling was projected on me: Egypt was giving birth…very painful…laborious…" A South American participant providing global context of events and encouragement using the metaphor of breastfeeding: "Well, I think that movement to change the model of government in your country is IMPORTANT FOR ALL OF US (INCLUDING LATINO AMERICA) because that kind of change has effects in all middle east country (at same manner that the movement to fall the dictator), effects in economics fields around the world, effects in the way to reorganize and how to obtain a common view of your country where are different points of view about it (that is a common situation in a lot of countries around the world)…So the problem is for all Egyptians not only for the president and his government and if the homework is well done this condition could be a wave more bigger than the last and I hope that it be great. All my prayers for you and your country in this new endeavour. And the image about the pain when the women had given birth could be compensated with the image when the newborn goes to her mamas to take breastfeeding (what a lot of happiness!!! between both)…"
Gender-related strand. A participant describing women trying to prove themselves in a masculine work culture: "After I selected the (4) employees, I realized the trouble of having (4) females who are trying to prove themselves in a very masculine culture. Competition was as evident as the sun from first day…and it was hell. Complaints everyday…unhealthy climate, poor relationships, poor communication…the [good of the] unit was the last thing they ever thought of considerably". A participant from the U.S. reflecting on western roles of men and women: "Western culture has evolved more and more into a self-directed, self-centered, individualistic culture of science, savage capitalism and alpha male/alpha female thinking." Discussing differences in east-west health care practices (CDA tool: activities conforming to social norms, or routinization), a participant discussing examining women: "Exposure of body parts is not allowed or only minimal exposure is allowed (e.g. in UK we were trained to examine the patient with tops off so that both breasts, chest and axillae could be properly examined. In [my country], patients will only allow the affected breast to be examined and despite request will not allow the contralateral breast to be examined. Men cannot do gynecological examination on women even in an emergency." Another participant: "Asking to take off clothes and wear a gown may be considered a norm in one society but a totally unacceptable behavior (or request by a doctor-even with the best of intentions) in another society of culture. We do come across such incidents in our conservative societies and this does conflict with what we were taught (and practiced) in the West."
Religion-related discourse. A participant's view on the impact of religion in guiding professional outlook: Although the "charter of professionalism" started with the Hippocratic oath, it is right that most of the religions have their own versions. As has been mentioned there are Hindu religion guidelines on ethics (selfless dedication to preservation of human life) as well as Chinese (skill with benevolence, the persons who undertake this work should bear the idea of serving the people of the community/world). Although the Quran is taken as the main guidance book for all ethics in Islam, still the first written book on medical ethics was way back in the ninth century when Ishaq bin Ali-Rahawi wrote the book "Adaab Al-Tabib" (Conduct of a Physician) (854-931 AD). Al Razi (Rhazes) is also well-known in the world of ethics as far as muslim ethics are concerned. Maimonides is a well known name in Jewish ethics. Percival's "Medical Ethics" was published in 1794 and the AMA code of medical ethics in 1847, and so on. So most of the societies and religions have their contribution to the field of ethics (and for us professionalism as well). It is nice to hear so many different views on how professionalism is perceived in different corners of the world. Although, overall, the main principles of do no harm, do good, justice, altruism and patient autonomy are part of all cultures, some subtle differences still remain (some serious).
---
Terrorism
As shown in Table 1, a participant from India broke into the on-line discussion by announcing a terrorist bomb attack. A participant replied empathically that such events are part of normal life in Pakistan. Then participants who had experienced bomb blasts or other forms of terrorism due to the Tamil guerrilla war and drug-related violence in South-America joined the discussion. As participants contributed their experiences, geographic borders became irrelevant. Participants wrote of terrorism as anti-social behavior; a life of living with terror; lack of safety; vigilance; not allowing oneself to be terrorized; life going on despite bomb blasts; hopes of terrorism ending, and peace returning. The text in Table 1 shows that participants did not comment on socioeconomic and political factors contributing to terrorism and relevant to healthcare. Terrorists were characterized as radicalized zealots who do not deserve sympathy or understanding: 'Thankfully, except for the person who was carrying the bomb, no one else was injured.' The net effect of this conversation was to create solidarity between participants who were potential victims of terrorism and emphasize the "otherness" of terrorists, but it did not relate the terrorism discussion to medical education.
---
The Arab spring
In a second part of this strand, vivid metaphors of childbirth and breastfeeding described the local political environment during the Arab Spring (Table 1). The metaphors gave readers a unique window into the life of someone they knew, who was now caught up in an uprising that held the world's attention. A South American participant picked up on the metaphor, expressed support, and offered opinions about social change. Later, a participant from the Middle East wrote that "Boundaries are boundaries-they are there to define the environment and mobilizing them is not always a choice" and asked "is it always feasible especially if it requires moving boundaries and making it safe?" A U.S. faculty participant reminded participants of a debate about democracy versus dictatorship during another module, but back-and-forth dialogue did not result. The conversation explored differences in Fellows' political environments but did not analyze their relevance to medical education.
---
Gender-related strand
Table 1 contains example text from a conversation about gender issues in treating women patients, which began during an e-learning module on Professionalism. Male and female participants participated in a candid and uninhibited way, describing social norms in their different countries. A participant from Bangladesh wrote that "Shaking hands is culturally and religiously governed, male doctors usually don't shake hands with women patients, they exchange salam (Assalamu Alaikum-peace be upon you!). But it is not mandatory.
Our present [female] Prime Minister Sheikh Hasina shake hands with all, but previous [female Prime Minister] Begum Khaleda Zia shakes hand only with ladies! So there is difference in same culture!" Participants from many countries discussed cultural restrictions imposed by male leaders to prevent women from receiving adequate medical care. Participants from India, Pakistan, Saudi Arabia and Egypt shared differences in physical examination of women patients (Table 1): "Exposure of body parts is not allowed or only minimal exposure is allowed (e.g. in UK we were trained to examine the patient with tops off so that both breasts, chest and axillae could be properly examined. In [my country], patient will only allow the affected breast to be examined and despite request will not allow the contralateral breast to be examined. Men cannot do gynecological examination on women even in an emergency." Another participant wrote, "Asking to take off clothes and wear a gown may be considered a norm in one society but a totally unacceptable behavior (or request by a doctor-even with the best of intentions) in another society of culture. We do come across such incidents in our conservative societies and this does conflict with what we were taught (and practiced) in the West."
In other posts, one participant offered a view about women physicians saying: "In India specially, the attire is important-at the hospital such as ours the female residents cannot come in skirts etc.-not as a rule but as an unwritten norm." Women's rights were touched on briefly: "USAID is also funding many projects on gender equality in Pakistan and a lot of work is being done by Pakistani females in this regard. A great example of how they are succeeding in their mission is that of one Pakistani Film producer, Sharmeen, who received an Oscar award for her film 'Saving Face' a few days ago. This film is regarding women who were disfigured because someone threw acid on their faces. Sharmeen brought this to the attention of the world through her film and this film also earned her an Oscar award, first time any Pakistani has won this award. Yesterday Pakistani parliament passed a law that will now lead to fine of one million rupees and life sentence or death sentence to any one who would carry out such a brutal act." Other posts touched on women trying to make their mark in a 'masculine' work environment.
Taken as a whole, the discussions identified and compared social norms in different cultures, exploring a spectrum of stances, from conservatism to liberal feminism. Explicit links were made to medical education but the relevance of the discussion was often left implicit.
---
Religion-related strand
Some participants wrote of the influence of religion-the Muslim, Hindu, Buddhist or Taoist faiths-on their professional identities. 'God' and 'Allah' were mentioned on several occasions, either in social posts or in the Professionalism e-learning module. The Muslim faith was discussed more frequently than other religions; participants emphasized the significance of moderation and how the Islam religion preaches "never be radical or extreme." One participant described Buddhism as preaching "ethical behavior which is compassion, loving kindness, the giving up from self-centeredness and greed." Another described the Hindu oath from fifteenth century BCE in the context of medicine: "the basic expectation from a physician is 'selfless dedication to preservation of human life', sometimes even at the cost of one's life!" A participant from China discussed how he related with the ancient Chinese mantra of "8 Chinese characters (医为仁术, 济世为怀), and that it means that, 'Medical work is a kind of skill with benevolence, the persons who undertake this work should bear the idea of serving the people of the community/world in their mind'. This has been recognized as the standard for the health care workers in ancient China, and is still mentioned today." The pattern noted in the previous strand, of exchanging experiences and norming, was again apparent, but in-depth exploration of the relevance of those cross-cultural issues to medical education was lacking.
---
General cultural strand
Posts during the Professionalism e-learning module addressed the topic of primary socialization. One participant posted about "the process of being raised by the origin family, since, we see and understand the world by what they do and convey to us and share concerning their values. All those values they have are, dialectically fruit of the sociocultural and political system." Another participant used capital letters to emphasize the significance of the Asian culture of respect: "the deep rooted culturally driven perception of RESPECT and the socially rejected CRITICISM against hierarchy, where feedback could be perceived as disrespect."
Participants shared the "insider" view of culture in their countries, discussing what an "outsider" would find strange if they did not share the knowledge and assumptions that render communications and actions natural and taken-for-granted by insiders. For example, participants noted that in some of these countries, especially in rural areas, a paternalistic doctor and patient relationship is the norm.
---
Discussion
---
Principal findings and meanings
The most striking finding of this research was not what was present in the data, but what was absent. A thorough search of a large corpus of posts to a cross-cultural discussion forum found that less than 1% of the text addressed cross-cultural issues. More detailed analysis showed that, even when cross-cultural topics were introduced, participants' responses to them tended to be rather muted. When more lively discussions took place, superficial comparisons of social norms, and solidarity between participants, were more likely to emerge than an exploration of how contrasting cultural perspectives illuminated the practice of medical education. Links between cross-cultural issues and the FAIMER curriculum were rarely made. That is not to denigrate the importance of telling stories, whose value is increasingly recognized (King 2003) because they lead to better understanding of other people's lives, which may foster cultural tolerance.
The silence which greeted some posts may be an example of 'situational silence,' in which institutional expectations constrain participants from responding (Lingard 2013). It may also signify cultural hegemony, when dominant cultural expectations make it difficult for people to identify themselves with positions that deviate from expected norms. Under those conditions, the discourse of faculty development may be restricted to uncontroversial subject matter (Lingard 2013;Dankoski et al. 2014). It is noteworthy that the mostly U.S. FAIMER faculty made very few contributions (fewer than 10) to the cross-cultural discussions. Whether this faculty 'silence' was related to cultural hegemony or lack of facilitation skills remains to be explored (Dankoski et al. 2014).
---
Relationship to other publications
Considerable theory and research show that cultural exchanges as part of curriculum are essential for transformative learning because they disrupt fixed beliefs and lead people to revise their positions and reinterpret meaning (Teti and Gervasio 2012;Kumagai and Wear 2014;Frenk et al. 2010). Otherwise, cultural hegemony imposes powerful influences on what and how people think about their society (Teti and Gervasio 2012). The role that silence, humor and emotions play in enhancing or inhibiting transformational learning (Lingard 2013;Dankoski et al. 2014;McNaughton 2013) has been little studied in crosscultural health professions education settings. Transformative learning is the cognitive process of effecting changes in our frame of reference-how we define our worldview where emotions are involved (Mezirow 1990). Adults often reject ideas that do not correspond to their particular values, so altering frames of reference is an important educational achievement (Frenk et al. 2010). Frames of reference are composed of two dimensions: points of view and habits of mind. Points of view may change over time as a result of influences such as reflection and feedback (Mezirow 2003). Habits of mind, such as ethnocentrism, are harder to change (Mezirow 2000). Transformative learning takes place by discussing with others the "reasons presented in support of competing interpretations, by critically examining evidence, arguments, and alternative points of view" (Mezirow 2006). This learning involves social participation-the individual as an active participant in the practices of social communities, and in the construction of his/her identity through these communities (Wenger 2000). When circumstances permit, transformative learners move toward a frame of reference that is a more inclusive, discriminating, self-reflective, and integrative of experience (Mezirow 2006).
Emancipatory learning experiences must empower learners to take action to bring about social and political change (Galloway 2012); therefore, in designing transformative learning, simply mixing participants from different cultures or including a topic addressing the ideological backgrounds of participants may not be enough (Beagan 2003;Kumas-Tan et al. 2007) to foster critical consciousness. While information and communications technology has enabled globalization of health professions education, several factors impact outcomes. The inhibiting power of cultural hegemony can make participants hesitate to interrupt curriculum-related discussions and contribute cultural observations. Participants' culture or media preference, and their individualist and collectivist cultural traits, can also affect communication styles (Schwarz 2001;Al-Harthi 2005). Pragmatic issues also play a role, such as participants' previous experience with using online settings for learning, professional development, or communities of practice (Dawson 2006).
On a facilitator's part, lack of confidence in facilitating cross-cultural discourse, especially in the online environment, can also adversely impact such discourse (Dankoski et al. 2014). Recent reports note the need for training of both faculty and learners to let go of the concept of objectivity, scrutinize personal biases, acquire skills to "make the invisible visible" (Wear et al. 2012) and unseat the existing hidden curriculum of cultural hegemony. Faculty need to find the balance between task completion and discussion of 'stories,' and acknowledge and take advantage of the tension between the opposing discourses of standardization and diversity (Frost and Regehr 2013).
---
Limitations and strengths
One factor that likely affected the cross-cultural discourses in this study was the perceived safety of disclosure. This may be particularly pertinent in the online setting, where current participants did not personally know all Fellows, and where privacy and security cannot be guaranteed. Fellows from two countries whose governments are widely thought to be authoritarian (but not Fellows from other countries) told us they were fearful of putting sensitive topics on the list serve because of government surveillance and IT monitoring. We were also limited to the voices appearing in the online discussion; there may have been additional communication outside the list serve (e.g., personal emails between participants and faculty). Participants may be more likely to support and repeat mainstream stories of experiences common to many, while withholding stories of vulnerability. Pragmatic group-level usability issues, such as information overload and challenges in accessing the list serve, may also have lowered the frequency of posts; such parameters are known to affect discourse structure and sense of community (Dawson 2006). Useful future research could include in-depth interviews seeking to understand why some participants felt comfortable sharing information about their lives while others did not, and exploration of the impact of culture and the online technology on this participation. Although instruments have been developed to measure participants' global cultural competence (Johnson et al. 2006; Kumas-Tan et al. 2007), sense of community (Center for Creative Leadership 2014), and classroom community strength (Dawson 2006), Kumas-Tan's work shows that current instruments measuring cultural competency ignore the power relations of social inequality (Johnson et al. 2006; Kumas-Tan et al. 2007). This would add another dimension to future research. Additionally, we realize that technology itself is a cultural tool; while not the focus of this study, the results, together with other studies we are conducting, are providing useful information for designing further studies to explore this issue.
While we did not attempt an exhaustive documentation of the cross-cultural discourses over years, the discourse over a 1-year period was sufficient to provide initial insights. This report provides a base line for us and others studying the nature of cross-cultural interactions in professional community of practice settings.
---
Implications for health professional educators
These observations lead to fundamental questions: Should a person's cultural background or current events in his or her home country be brought up in an online e-learning environment for faculty development and fostering a professional community of practice? Is it possible to do this in an online discussion, or should this be left to face-to-face learning activities? What has it to do with health professions education? Is this a distraction for other faculty? Should learning environments maintain cultural hegemony by limiting such discourse? Should faculty actively facilitate or not?
If we conclude that cultural issues should be addressed in online cross-cultural discussions, then we need to look at the depth of these discussions; in our sample, they remained non-analytical and relatively superficial. Future interventional research could include addressing how to foster discussions about participant social identity (Burford 2012), the impact of doing so on learner engagement, and the facilitation skills needed to provide a safe environment for such discussions.
While we may be able to keep a group of learners 'on task' by prescribing cultural hegemony, we may miss a critical opportunity to transform the frames of reference of both learners and educators (Frenk et al. 2010) and to 'unmask illusions of pure objectivity' (Wear et al. 2012). Letting go of the need to keep contributions "culture-free" may empower participants to talk (or write). Moreover, knowing each other's stories makes participants in a teaching/learning setting feel they are part of a group, which can stimulate participation and reduce dropout rates (Tinto 1997). Allowing room for spontaneous stories, such as the terrorist bombings in India or the Arab uprising in Egypt, can also help a group understand and accept limited participation from those who may be preoccupied with current events in their countries or lack regular access to the internet because of various conditions.
Openness to sharing cultural perspectives may be an important way to foster cultural competence, a Liaison Committee on Medical Education (LCME) mandated goal for all U. S. and Canadian medical schools (Association of American Medical Colleges, Liaison Committee on Medical Education 2003). Attention to informal discussions in online
---
Conflict of interest None. | 36,478 | 1,033 |
290cac900fb906c21a4c4ef6682259c8b82978dd | Association between socioeconomic factors and unmet need for modern contraception among the young married women: A comparative study across the low- and lower-middle-income countries of Asia and Sub-Saharan Africa | 2,022 | [
"JournalArticle",
"Review"
] | Modern contraceptive methods are effective tools for controlling fertility and reducing unwanted pregnancies. Yet, the unmet need for modern contraception (UNMC) remains high in most of the developing countries of the world. This study aimed to compare the coverage of modern contraceptive usage and the UNMC among the young married women of low- and lower-middle-income countries (LMICs) of Asia and Sub-Saharan Africa, and further examined the likelihood of UNMC across these regions. This cross-sectional study used Demographic and Health Survey (DHS) data on family planning from 32 LMICs of South Asia (SA), Southeast Asia (SEA), West-Central Africa (WCA), and Eastern-Southern Africa (ESA). Multilevel logistic regression models were used to investigate the relationship between UNMC and women's socioeconomic status. Out of 100,666 younger married women (15-24 years old), approximately 37% used modern contraceptives, and 24% experienced UNMC. Regionally, women from SA reported higher modern contraceptive usage (44.7%) and higher UNMC (24.6%). Socioeconomic factors such as higher education (in SA and WCA), unemployment (in SA and ESA), no media exposure (in SA and ESA), and higher decision-making autonomy (except in SEA) showed positive and significant associations with UNMC. The poorest households were positively associated with UNMC in SA and ESA, while negatively associated with UNMC in SEA. UNMC was most frequently reported among SA young married women, followed by the WCA, SEA, and ESA regions. Based on this study's findings, versatile policies, couples counseling campaigns, and community-based outreach initiatives might be undertaken to minimize UNMC among young married women in LMICs. | Introduction
Unwanted pregnancies and unsafe abortions can seriously affect any sexually active woman and have negative impacts on women's personal and conjugal lives, their families, and societies. Owing to unsafe abortions, thousands of women die and millions more suffer long-term reproductive problems, including infertility. The incidence of unwanted pregnancies and unsafe abortions is likely to keep rising until women's need for modern contraception is met [1]. To estimate women's need for family planning services (i.e., modern contraception) and to assess women's ability to realize their reproductive intentions, the concept of 'unmet need for modern contraception' (UNMC) has recently been introduced [2]. Globally, this important indicator is widely used for advocacy, for developing family planning policies, and for implementing and monitoring family planning programs [3]. Conceptually, UNMC captures sexually active or fecund women who are not using modern contraceptive methods but intend to conceive a child later or to stop having children [3,4]. Since the degree of UNMC is one of the basic indicators for evaluating the effectiveness of a country's family planning program, women with UNMC are logical priority targets for such programs [3,4].
In 2012, the global community launched the Family Planning 2020 (FP2020) initiative at the 'London Summit for Family Planning', built on the principle that all women, regardless of their place of residence and economic status, should enjoy their human right to access safe, effective, and voluntary contraceptive services and commodities [4]. Since then, the FP2020 movement has focused on the 69 poorest countries, and the global number of reproductive-age married women using modern contraceptives increased by 30.2 million between 2012 (270 million) and 2016 (more than 300 million) [5]. However, the use of modern contraceptives among married women increased more slowly in Asia (from 51 percent to 51.8 percent) between 2012 and 2017 than among their counterparts in the African region (from 23.9 percent to 28.5 percent) [6]. On the other hand, the overall UNMC was reported to be 21.6 percent among the FP2020 focus countries in 2017, exceeding 25 percent in most Southern Asian and Sub-Saharan African countries [6]. This high percentage of UNMC indicates a significant barrier to achieving Sustainable Development Goal 3.7 (SDG 3.7). The high prevalence of UNMC also impedes the achievement of a higher proportion of demand satisfied by modern methods, which is one of the major health-related indicators of the SDGs (SDG Indicator 3.7.1) [5,7].
Effective efforts to reduce UNMC require region-wise assessment of the socio-demographic characteristics of the population and identification of the underlying factors that directly influence unmet need [8,9]. Some country-specific studies [8,10,11] and much of the earlier literature [2,12-14] have reported that various socioeconomic factors; limited choice of and access to family planning methods; fear of the side effects of contraceptives; child marriage; urban-rural disparities; spousal age difference; and religious or cultural constraints can shape the level of UNMC among women of reproductive age [8,11,12,15]. Moreover, some of these studies included all sexually active women regardless of age and marital status [6,15]. Compared with older married women, however, young married women (aged 15-24 years) have been reported to experience disproportionately higher levels of UNMC owing to their distinct fertility preferences (e.g., partners' desire for more children or for male children, avoidance of pregnancy complications at older ages, and persistently high child mortality), and such preferences vary from culture to culture [2,12,13,15]. Hence, the actual associations between socioeconomic factors and UNMC, especially for younger married women, may not be properly reflected when all women of reproductive age are included.
The percentage of UNMC remains high among younger married women in low- and lower-middle-income countries (LMICs), particularly in the South Asian, Southeast Asian, and Sub-Saharan African regions [2], yet few studies have explored and compared the prevalence and associated factors of UNMC among the young married women of these regions. Although Ahinkorah et al. (2020) [16] investigated socio-demographic variations in unmet need for contraception among younger women, that study was conducted regardless of marital status, was confined to the Sub-Saharan African region, and considered any type of contraception. Such limitations of the existing literature impede international comparability and underline the necessity of region-by-region investigation of UNMC and its associated socioeconomic factors among young married women in LMICs. A comprehensive comparative study examining the current prevalence of UNMC and identifying its associated factors would also assist policymakers in the individual regions to adapt and implement successful family planning programs suited to their cultural contexts and socioeconomic conditions. To address these issues, this comparative study investigated the coverage of modern contraceptive usage and UNMC among young married women, and identified the socioeconomic factors associated with UNMC in the LMICs of the Asian and Sub-Saharan African regions.
---
Methods
---
Data sources
Data from the latest Demographic and Health Surveys (DHS) with available information on family planning, conducted from 2014 onwards in 32 LMICs of Southern Asia and Sub-Saharan Africa, were used. Five countries from South Asia (Afghanistan, Bangladesh, India, Nepal, and Pakistan), four from Southeast Asia (Cambodia, Myanmar, Philippines, and Timor-Leste), 13 from West and Central Africa (Angola, Benin, Cameroon, Chad, Congo Democratic Republic, Ghana, Guinea, Liberia, Mali, Nigeria, Senegal, Sierra Leone, and Togo), and 10 countries from East and Southern Africa (Burundi, Ethiopia, Kenya, Lesotho, Malawi, Rwanda, Tanzania, Uganda, Zambia, and Zimbabwe) were included. The DHS are publicly available, nationally representative, cross-sectional surveys conducted in LMICs using multistage (usually two-stage) cluster sampling. Along with other information on maternal and child health outcomes and interventions, the DHS regularly gather information on family planning and reproductive health. Detailed administrative procedures, training, sampling strategies, and methodology of the DHS have been described elsewhere [17,18].
---
Study population
This cross-sectional study was limited to currently married young women aged 15-24 years. After excluding records with missing information on outcomes or covariates, a total of 100,666 married women with complete information from 32 LMICs of Asia and Sub-Saharan Africa (SSA) were finally selected for this study (Table 1).
---
Measurements
Modern contraception methods. Modern contraception methods include contraceptive pills, condoms (male and female), intrauterine devices (IUDs), injectables, hormonal implants, sterilization (male and female), patches, diaphragms, spermicidal agents, and emergency contraception [17]. The prevalence of modern contraceptive usage was determined as the percentage of women of reproductive age who reported that they or their partners were currently using at least one modern contraception method.
Unmet need for modern contraception. UNMC, the third core indicator of the FP2020 initiative, was measured as the percentage of fecund women of reproductive age who want no more children or want to postpone the next child but are not using any contraceptive method, plus women currently using a traditional method of family planning [3]. Women using any traditional method (such as abstinence, withdrawal, the rhythm method, douching, and folk methods) were therefore considered to have a UNMC. Pregnant women with a mistimed or unwanted pregnancy were also considered in need of contraception.
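The definition above amounts to a classification rule, which the following sketch expresses in code. It is a schematic simplification, not the official DHS unmet-need algorithm (which also handles postpartum amenorrhoea, detailed infecundity criteria, and method-specific coding); all field names in the function are hypothetical.

```python
TRADITIONAL_METHODS = {"abstinence", "withdrawal", "rhythm", "douching", "folk"}

def has_unmet_need(woman):
    """Schematic UNMC classification for a currently married woman.

    `woman` is assumed to be a dict with keys: 'pregnant' (bool),
    'pregnancy_intention' ('wanted', 'mistimed', 'unwanted'), 'fecund' (bool),
    'wants_child_soon' (bool), 'using_modern_method' (bool), and
    'method' (name of the current method, or None).
    """
    # Pregnant women whose pregnancy was mistimed or unwanted count as unmet need.
    if woman["pregnant"]:
        return woman["pregnancy_intention"] in {"mistimed", "unwanted"}
    # Users of traditional methods are treated as having an unmet need
    # for *modern* contraception.
    if woman["method"] in TRADITIONAL_METHODS:
        return True
    # Fecund non-users who want to stop or postpone childbearing.
    if woman["fecund"] and not woman["using_modern_method"]:
        return not woman["wants_child_soon"]
    return False
```

Applying such a rule to every eligible woman and taking the proportion classified as True yields the UNMC percentage reported in the Results.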
---
Exposures.
Based on the empirical literature [8,9,19,20], this study considered four variables, i.e., educational level, type of earning from work, exposure to media (family planning messages), and household-level decision-making autonomy, as proxy variables for the socioeconomic status of respondents, together with the household wealth index. Educational level was classified as no education, primary, secondary, and higher. Type of earning from the respondent's work was categorized as not working, working and paid (cash paid, in-kind paid, or both), and working but not paid. Exposure to family planning messages via mass media refers to hearing family planning messages by listening to the radio, watching TV, or reading newspapers in the last few months. 'Exposure to media' was dichotomized by assigning a value of 1 if the respondent heard family planning messages from at least one of the mass media, and 0 if she did not. Women's household-level decision-making autonomy was measured using their responses to four questions asking who makes decisions in the household regarding obtaining health care for herself, making large purchases, visiting family and relatives, and using contraception. Response categories were the respondent alone, the respondent and her husband/partner jointly, her husband/partner alone, or someone else. For each of the four questions, a value of 1 was assigned if the respondent was involved in making the decision and 0 if she was not; the values were then summed and dichotomized as 'participated' and 'not participated'. Finally, the household wealth index was computed by the DHS using principal component analysis of the assets owned by households, with the detailed analytical procedures described elsewhere [21]. The score was categorized into five equal quintiles (poorest, poorer, middle, richer, and richest), with the first representing the poorest 20% and the fifth the richest 20%.
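As an illustration of how these exposure variables could be derived from individual-level records, the sketch below uses a small toy dataset with hypothetical column names; real DHS recode variables are named differently, and the wealth factor score is taken as already computed by the DHS via principal component analysis, so only the quintile split is shown.

```python
import pandas as pd

# Toy data standing in for a DHS extract (one woman per row); real recode
# variable names differ.
df = pd.DataFrame({
    "fp_radio":          [1, 0, 0, 0, 1, 0],
    "fp_tv":             [0, 0, 1, 0, 1, 0],
    "fp_newspaper":      [0, 0, 0, 0, 0, 1],
    "dec_health":        [1, 0, 1, 0, 1, 1],
    "dec_purchases":     [1, 0, 0, 0, 1, 1],
    "dec_visits":        [1, 0, 1, 0, 1, 1],
    "dec_contraception": [1, 0, 1, 0, 1, 1],
    "wealth_score":      [-1.2, -0.4, 0.1, 0.6, 1.3, 2.0],
})

# Media exposure: 1 if family planning messages were heard via any of the
# three media in the last few months, else 0.
media_cols = ["fp_radio", "fp_tv", "fp_newspaper"]
df["media_exposure"] = (df[media_cols].sum(axis=1) > 0).astype(int)

# Decision-making autonomy: 1 point per decision in which the respondent
# took part (alone or jointly with her husband/partner), then dichotomised.
decision_cols = ["dec_health", "dec_purchases", "dec_visits", "dec_contraception"]
df["autonomy_score"] = df[decision_cols].sum(axis=1)          # range 0-4
df["autonomy"] = (df["autonomy_score"] > 0).map(
    {True: "participated", False: "not participated"})

# Wealth: the DHS-supplied factor score is split into five equal quintiles.
df["wealth_quintile"] = pd.qcut(
    df["wealth_score"], q=5,
    labels=["poorest", "poorer", "middle", "richer", "richest"])

print(df[["media_exposure", "autonomy", "wealth_quintile"]])
```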
---
Controlling variable.
Based on previous studies [8,9,11,12,15], the following controlling variables were used in the analyses along with the predictor variables: whether the partner was more educated than the wife (yes, no); spousal age difference (less than 5 years, 5 to 9 years, 10 years and more); number of living children (no child, 1 to 3, 4 and more); whether the respondent married before 18 years of age (yes, no); and place of residence (urban, rural).
---
Statistical analysis
Frequency distributions and univariate analyses were used to compare the proportion of UNMC across respondents' socioeconomic characteristics. Multilevel logistic regression models with random intercept terms at the community and country levels were used to estimate adjusted odds ratios (AORs), along with 95% confidence intervals (CIs), for the relationship between the exposures and UNMC. Models were adjusted for whether the partner was more educated than the wife, spousal age difference, number of living children, marriage before 18 years of age, and place of residence. For all analyses, P < 0.05 was set as the level of significance. The complex survey (DHS) design was taken into account in all analyses using Stata's 'SVY' command. Data management and statistical analysis were conducted in Stata version 16.1/MP.
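A rough analogue of this modelling step can be sketched in Python. The example below is a simplified, single-level approximation with cluster-robust standard errors rather than the multilevel, survey-weighted models the study fitted in Stata, and the file name, variable names, and model formula are all hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical flat file with one young married woman per row; real DHS
# recode files and variable names differ, and rows with missing values are
# assumed to have been excluded already, as in the paper.
df = pd.read_csv("pooled_dhs_young_married_women.csv")

formula = ("unmc ~ C(education) + C(employment) + media_exposure + "
           "C(autonomy) + C(wealth_quintile) + partner_more_educated + "
           "C(spousal_age_gap) + C(living_children) + married_before_18 + rural")

# Single-level logit with standard errors clustered on the sampling cluster
# (community). This approximates, but does not reproduce, the paper's
# multilevel random-intercept models fitted with survey weights in Stata.
fit = smf.logit(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["cluster_id"]})

print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the OR scale
```

A fuller reproduction would add random intercepts for community and country and apply the DHS sampling weights and strata, as described in the paragraph above.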
---
Results
---
Country-specific coverage of modern contraceptives and percentage of unmet need
Overall, 100,666 young married women from Asia and Sub-Saharan Africa were included in this analysis. The mean age (±SD) of the study population was 21.17 (±2.23) years, while their mean age (±SD) at marriage was 17.15 (±2.50) years. The pooled estimate from the 32 LMICs showed that about 37% of young married women used modern contraceptives and 24% had UNMC (S1 Table).
In the country-specific estimates, the overall percentage of modern contraceptive usage was highest in South Asia (SA) (44.7%; CI: 43.9%-45.6%), followed by Eastern and Southern Africa (ESA) (42.7%; CI: 41.6%-43.8%) and Southeast Asia (SEA) (36.5%; CI: 34.8%-38.3%) (S1 Table). Modern contraceptive usage varied from 11.6% (Pakistan) to 54.4% (India) in SA, from 17.9% (Timor-Leste) to 58.1% (Myanmar) in SEA, from 2.3% (Chad) to 19.7% (Ghana) in WCA, and from 23.5% (Burundi) to 58.5% (Zimbabwe) in ESA (Fig 1).
On the other hand, UNMC was most frequently reported in SA (24.6%; CI: 24.0%-25.1%), compared with WCA (24.2%; CI: 23.5%-25.0%), SEA (24.0%; CI: 22.6%-25.5%), and ESA (21.5%; CI: 20.7%-22.4%) (S1 Table). The proportion of UNMC ranged from 20.5% (Bangladesh) to 41.5% (Nepal) in SA, from 14.8% (Myanmar) to 28.5% (Timor-Leste) in SEA, from 15.6% (Nigeria) to 39.5% (Togo) in WCA, and from 11.0% (Zimbabwe) to 32.6% (Uganda) in ESA (Fig 1).
The younger married women of Asia had higher socioeconomic status in all the selected aspects than their counterparts in SSA. The percentage distribution of the socioeconomic characteristics of the study population is displayed in S2 Table. S3 Table shows that UNMC was significantly higher among South Asian younger women who had unpaid work (29.1% vs 22.4%), no media exposure (25.4% vs 23.9%), and high decision-making autonomy (27.5% vs 23.0%) than among those with paid work, media exposure, and medium decision-making autonomy. In contrast, women from East and Southern Africa who had secondary or higher education (17.5% vs 24.6%), medium decision-making autonomy (18.7% vs 39.7%), and the richest wealth quintile (17.2% vs 24.6%) reported a significantly lower proportion of UNMC than those with no education, high decision-making autonomy, and the poorest households (S3 Table).
---
Socioeconomic factors affecting unmet need for modern contraception
To investigate the socioeconomic factors associated with UNMC, the unadjusted and adjusted multilevel logistic regression models for the four regions and for the pooled data are presented in Tables 2 and S4, respectively. In the adjusted analysis (Model II), for SA countries, women's secondary and higher education (AOR: 1.37), not-working status, lack of media exposure, high decision-making autonomy, and the poorest wealth quintile were positively and significantly associated with UNMC; higher education and high decision-making autonomy were likewise positively and significantly associated with UNMC among the women of WCA (Table 2). In ESA, women with high decision-making autonomy had 1.94 times higher odds of experiencing UNMC, whereas women with medium decision-making autonomy had 14% lower odds, compared with women with low decision-making autonomy. Additionally, women's not-working status, lack of media exposure, and the poorest wealth quintile showed positive and significant associations with UNMC in ESA (Table 2). In the pooled data from the 32 low- and lower-middle-income countries of Asia and SSA, the adjusted analysis revealed that primary education, secondary and higher education, not-working status, lack of media exposure, high decision-making autonomy, and the poorest wealth quintile were positively associated with UNMC, whereas medium decision-making autonomy showed a negative but significant association with UNMC (S4 Table).
---
Discussion
To the best of our knowledge, this is the first comprehensive study to explore and compare the socioeconomic factors associated with UNMC among younger married women across 32 LMICs of the Asian and African regions. Younger married women from the SA region were ahead of their counterparts in the SSA regions in terms of modern contraceptive usage, but UNMC was most frequently reported among SA young married women, followed by the WCA, SEA, and ESA regions. Several socioeconomic factors, namely higher educational level (in SA and WCA), not-working status (in SA and ESA), no exposure to media (in SA and ESA), and higher decision-making autonomy (in SA, WCA, and ESA), showed positive and significant associations with UNMC. The poorest households were positively associated with UNMC among the women of SA and ESA, whereas they showed a negative association with UNMC in the SEA and WCA regions.
Similar to our study, a recent investigation of World Family Planning (2017) indicated that SA had a higher rate of modern contraceptive use than Africa and SEA [22]. Again, UNMC was found to be higher where modern contraceptive prevalence was low [22], i.e., in the WCA, SEA, and ESA regions, which is also similar to our findings, except for SA. Younger married women hold new norms about family planning and family size owing to the development of their empowerment status [19], which can outpace the availability and use of contraceptives [22]. This might be one plausible reason behind the stable or increasing prevalence of UNMC among the younger married women of SA. On the contrary, WCA reported higher UNMC in our study, which might be explained by the high-level usage of traditional contraceptive methods [23], and by unawareness of, unavailability of, and the cost of modern contraception [24]. Regionally, the women of Africa (Benin 36%, Burkina Faso 27%, Burundi 33%, Cameroon 33.2%, DR Congo 40%, Ghana 34%, Liberia 32%, and Uganda 33%) and the southern parts of Asia (Afghanistan 28%, Nepal 26%, Pakistan 30%, and Sri Lanka 22%) experienced more UNMC than other regions (Southeast Asia and East Asia < 20%) [6], which is broadly consistent with the findings for younger married women in this study.
While exploring the associated socioeconomic factors, UNMC among the younger married women of SA, WCA, and ESA showed a positive association with both higher educational level and high decision-making autonomy. A comparative study by Kerry MacQuarrie conducted in 41 developing countries also reported a similar positive relationship between higher educational attainment and UNMC [2], but this outcome contradicts some studies from Pakistan [8], Ethiopia [14], and several African countries [9]. A possible reason for this contradiction might be the age of the study population: the aforementioned studies [8,9,14] included women of reproductive age (15-49 years), whereas our study, like that of Kerry MacQuarrie [2], considered only younger married women. Even though young women may be educated and aware of contraceptive use, factors such as expectations of pregnancy early in marriage, preference for male children, limited access to modern spacing contraceptives (such as oral contraceptive pills, intrauterine devices, and condoms) and sterilization, family resistance to adopting contraceptives, and husbands' reluctance on family planning issues can increase their UNMC [25]. NGO-conducted yard meetings, counseling by family planning workers, and basic family planning education in schools might be effective in overcoming the existing reluctance, resistance, and misconceptions about modern contraception among spouses and other family members.
Similar to our study, a study from Southern Asia [19] showed that women of reproductive age (15-49 years) with higher decision-making autonomy used modern contraceptives more frequently and experienced less UNMC. Women's decision-making autonomy greatly depends on their age at marriage. Women marrying at an early age usually have lower social standing in the household, whereas later marriage gives a woman greater authority within the home, the ability to negotiate with household members, and stronger involvement in decision-making after marriage [26]. In many LMICs, when younger women try to raise their voices, especially for their reproductive rights, and try to make decisions about their choice of family planning methods, they may experience different types of spousal violence [20]. On the other hand, couples with an egalitarian power structure in the household, and women holding a medium level of authority within the home, appeared to be more effective in satisfying their unmet contraception demand [27]. This may explain why high decision-making authority showed a positive association with UNMC among the married young women of the SA, WCA, and ESA regions in this study. Promoting community-based outreach campaigns and multisectoral family planning programs that focus on couples' egalitarian decision-making in the household might reduce the level of UNMC [27].
In both SA and ESA, unemployment and the poorest wealth quintile were positively associated with the experience of UNMC among the study population. These findings are consistent with some earlier empirical results [2,8,9,15]. Employed women, as well as women from economically advantaged households, are usually better able to bear the opportunity cost of bearing and rearing a child than unemployed and poor women [8]. But the scenario is different in most low-income countries, because hiring babysitters is costly and they are not always available in LMICs [28]. In such cases, the mother's sole responsibility for bearing and rearing a child reduces the time she can devote to paid work and, consequently, she may have to forgo her source of income. Thus, unemployed and poor mothers try to avoid extending their family size and focus on managing household costs. Moreover, compared with poor families, better-off households have better access to modern contraceptives and to most family planning services [8]. Therefore, similar to studies from Nigeria, Pakistan, and Zambia [9], our study observed an increased likelihood of UNMC among unemployed women and poor households, compared with employed women and richer households. Introducing home-craft markets and promoting micro-finance programs could create more employment opportunities for younger women. Establishing healthcare complexes in remote and rural areas would provide better access to family planning services for underprivileged populations. Additionally, non-governmental organizations (NGOs) and local governing bodies should supply modern contraceptives at low cost in economically disadvantaged regions.
Consistent with previous studies [8,29], lack of exposure to family planning messages via the media was found to be one of the major socioeconomic determinants of UNMC in this study. Plausible reasons include lack of knowledge about the advantages of contraceptive use, negative perceptions, and excessive fear of the side effects of contraception [8,29,30]. A qualitative study from rural India [25] revealed that young married couples without proper media access hold misperceptions about the use of oral contraceptive pills and intrauterine devices. For the last two decades, governments of LMICs have implemented many actions to convince people of the efficacy of birth control programs through extensive media campaigns, in which messages from celebrities and influential members of society are communicated to persuade people of the benefits of family planning programs. However, owing to rural women's lack of access to media, these governmental efforts have not been sufficient to change the perceptions of rural and superstitious people about the side effects of using contraceptives [8,30,31]. Access to media therefore has to be increased through intervention programs, and family planning messages, advertisements, and campaigns via the mass media should be accelerated. Such campaigns and messages may help remove superstitions and the fear of side effects of using contraceptives among rural and less-educated people. This would eventually increase awareness of sexual and reproductive health, the acceptability of modern contraceptives, and autonomy in fertility decision-making.
The prime strength of this study is its use of large and nationally representative surveys from 32 LMICs of Asia and Sub-Saharan Africa. To date, this is the first study to estimate country-wise UNMC among younger women and to examine and compare its association with women's socio-economic status across regions. Importantly, the study was limited to younger women who were married: including unmarried women could have underrepresented estimates in some regions, as some African and South Asian countries have limited reproductive health data for unmarried women, and many unmarried women with sexual experience may feel uncomfortable reporting it, which could bias the measurements. On the other hand, the study sample was restricted to married women aged 15-24, and the data relied mainly on their verbal reports. Women's perception of wanting the next pregnancy, or of spacing it, may also change during pregnancy or depend on life circumstances. Additionally, the possibility of social desirability bias remains, owing to the self-reported nature of the data collection, whose validity and reproducibility are unknown. Finally, as this was a cross-sectional study, it was not possible to make causal inferences, only to report associations.
---
Conclusion
The highest coverage of modern contraceptive use among younger married women was reported in SA and the lowest in WCA, yet women from SA and ESA experienced the highest and lowest proportions of UNMC, respectively. In SA, women's socioeconomic factors such as higher education, unemployment, lack of media access, high decision-making autonomy, and poor wealth index showed a positive association with UNMC, whereas medium decision-making autonomy and poor wealth index showed a negative association with UNMC in SEA. High decision-making autonomy increased women's UNMC in both WCA and ESA. Additionally, higher education in WCA, and unemployment, lack of media exposure, and poor wealth index in ESA were positively associated with women's experience of UNMC; these are noteworthy contributions to the field of UNMC. However, to achieve Sustainable Development Goals (SDGs) target 3.7, i.e., ensuring universal access to sexual and reproductive healthcare services by 2030, the international community must continue the existing campaigns to increase modern contraceptive use worldwide, and policy makers of the respective LMICs can implement versatile intervention programs to reduce UNMC among younger married women based on the findings and suggestions elicited in light of this comparative study.
---
All the datasets are available at: https://dhsprogram.com/data/available-datasets.cfm. | 26,521 | 1,692
c927c478d0f1f683a5cbc6b64d8331177abe2eba | The influence of socioeconomic environment on the effectiveness of alcohol prevention among European students: a cluster randomized controlled trial | 2,011 | [
"JournalArticle"
] | Background: Although social environments may influence alcohol-related behaviours in youth, the relationship between neighbourhood socioeconomic context and effectiveness of school-based prevention against underage drinking has been insufficiently investigated. We study whether the social environment affects the impact of a new school-based prevention programme on alcohol use among European students. Methods: During the school year 2004-2005, 7079 students 12-14 years of age from 143 schools in nine European centres participated in this cluster randomised controlled trial. Schools were randomly assigned to either control or a 12-session standardised curriculum based on the comprehensive social influence model. Randomisation was blocked within socioeconomic levels of the school environment. Alcohol use and alcohol-related problem behaviours were investigated through a self-completed anonymous questionnaire at baseline and 18 months thereafter. Data were analysed using multilevel models, separately by socioeconomic level. Results: At baseline, adolescents in schools of low socioeconomic level were more likely to report problem drinking than other students. Participation in the programme was associated in this group with a decreased odds of reporting episodes of drunkenness (OR = 0.60, 95% CI = 0.44-0.83), intention to get drunk (OR = 0.60, 95% CI = 0.45-0.79), and marginally alcohol-related problem behaviours (OR = 0.70, 95% CI = 0.46-1.06). No significant programme's effects emerged for students in schools of medium or high socioeconomic level. Effects on frequency of alcohol consumption were also stronger among students in disadvantaged schools, although the estimates did not attain statistical significance in any subgroup. Conclusions: It is plausible that comprehensive social influence programmes have a more favourable effect on problematic drinking among students in underprivileged social environments. | Background
Alcohol use is a major cause of mortality and morbidity among young people, being implicated in large proportions of unintentional injuries [1][2][3][4], as well as of violent behaviours resulting in homicides and suicides [5,6]. Underage alcohol drinking has also been associated with school drop-out [7] and unsafe sex [3], which in turn predict poor general health later in life [8]. Studies in the United States, Australia, and Europe have indicated that early onset of alcohol use is a predictor of substance abuse and alcohol dependence in adulthood [9][10][11]. Although most of these behaviours are associated with socioeconomic characteristics among youths [12], little evidence exists in the literature in support of a socioeconomic gradient of alcohol use during adolescence [13]. However, some differences emerge when different drinking dimensions are investigated. Some studies among young people have reported a direct relationship between household income and frequency of alcohol consumption [14,15], but an inverse relationship between the occupational level of the father and the quantity of alcohol consumed on a typical drinking occasion [16]. Other studies have suggested that low socioeconomic status may be associated with problematic drinking in youth [17][18][19].
Given social differences in profiles of alcohol use and the recognized need to reduce the social gap in the burden of risk factors [20], an evaluation of preventive programmes across social strata is desirable. Since most preventive programmes are delivered at the community level (e.g. in schools) rather than at the individual level, measures of social disadvantage should be assessed accordingly, at the collective level. In fact, recent studies in the United States reported complex associations between community-level indicators of socioeconomic status and underage drinking [21][22][23]. Besides, research has shown that neighbourhood socioeconomic position influences health related behaviours [24,25]. Several potential mechanisms have been hypothesized such as availability of health, social and community support services and provision of tangible support (e.g. transportation, leisure and sporting facilities) [26]. Therefore, it is plausible that the context into which a preventive program is brought will influence its effectiveness.
However, this effect has rarely been considered in the evaluation of school-based interventions against alcohol use, for instance by comparing intervention effects between areas of different social levels.
The purpose of the present study was to analyse whether the social environment at the level of the school area affects the effectiveness of preventive school curricula on alcohol use. The EU-Dap (EUropean Drug Abuse Prevention) study was the first European trial designed to evaluate the effectiveness of a new school-based programme ("Unplugged") for substance use prevention. Participation in the programme was associated with a lower occurrence of episodes of drunkenness and alcohol-related behavioural problems 18 months after baseline, compared to usual curricula, while average alcohol consumption was not impacted [27,28].
Since the socioeconomic status of the living environment has been associated with adolescents' educational achievements and health behaviour [29,30], we hypothesized a different preventive impact of the intervention in environments with different socioeconomic levels.
---
Methods
The EU-DAP trial (ISRCTN-18092805) took place simultaneously in nine centres of seven European countries: Austria, Belgium, Germany, Greece, Italy, Spain and Sweden. The research protocol complied with the ethical requirements foreseen at the respective study centres.
---
Experimental Design and Sample
The study was a cluster randomised controlled trial among students attending junior high school in the participating regional centres: one urban community from each involved country (the municipality of Wien, Merelbeke, Kiel, Bilbao, the North-west region of Thessaloniki, and the Stockholm region of Sweden) and three urban communities from Italy (the municipality of Turin, Novara, and L'Aquila). One-hundred and seventy schools were selected on the basis of inclusion criteria and of willingness to cooperate.
Schools were sampled in order to achieve a balanced representation of the underlying average socioeconomic status of the population in the corresponding catchment area. Prior to randomisation, schools within each regional centre were ranked by social status indicators and classified as schools of either high, medium, or low socioeconomic level on the basis of tertiles of the corresponding distribution. This stratification was done independently by each regional centre using the most reliable and recently available data. Different indicators were used (Table 1). Indicators of the population's social conditions in the catchment area of the school were used in Greece and Sweden. In Germany, Belgium, and the two Italian centres of Turin and Novara, the type of school was used, because there is a clear social class gradient in the corresponding school systems. In the remaining regional centres, a combination of area and school indicators was used.
Schools in each centre were randomised to either the intervention or a "usual curriculum" (control) group within the socioeconomic stratum.
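To make the blocking procedure concrete, the following is a minimal sketch (in Python with pandas/NumPy, which is not software named by the study; the single composite indicator and all variable names are illustrative assumptions, since each centre actually used its own locally available indicators as listed in Table 1) of ranking schools into socioeconomic tertiles and then randomising to the two arms within each stratum.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2004)  # fixed seed so the allocation is reproducible

# Hypothetical school-level data for one regional centre: one row per school,
# with a generic socioeconomic indicator (higher = more advantaged).
schools = pd.DataFrame({
    "school_id": range(1, 19),
    "ses_indicator": rng.normal(loc=100, scale=15, size=18),
})

# Rank schools into tertiles of the indicator: the blocking variable.
schools["ses_level"] = pd.qcut(schools["ses_indicator"], q=3,
                               labels=["low", "medium", "high"])

# Block randomisation: within each socioeconomic stratum, assign schools
# to the intervention arm or the usual-curriculum control arm.
def assign_within_stratum(group: pd.DataFrame) -> pd.DataFrame:
    arms = np.resize(np.array(["intervention", "control"]), len(group))
    out = group.copy()
    out["arm"] = rng.permutation(arms)
    return out

schools = (schools.groupby("ses_level", observed=True, group_keys=False)
                  .apply(assign_within_stratum))
print(schools.sort_values(["ses_level", "arm"]))
```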
Students in the intervention group participated in the EU-Dap substance abuse preventive programme consisting of 12 one-hour sessions designed to tackle adolescents' use of alcohol, tobacco and illicit drugs. This new curriculum is based on a Comprehensive Social Influence model [31], and focuses on developing and enhancing interpersonal and intrapersonal skills. Sessions on normative education and information about the harmful health effects of substances are also provided. Details on the curriculum theory base and content have been provided elsewhere [32]. Ordinary classroom teachers were trained during a 3-day course in interactive teaching techniques. Thereafter, they administered the intervention sessions over three months. The protocol of the programme implementation was carefully standardised. Students in the control group received the programme normally in use at their schools, if any.
In October 2004, 7079 students aged 12-14 years (3532 in control schools and 3547 in intervention schools) participated in the pre-test survey. Post-test data were collected in May 2006, i.e. at least 18 months after baseline. Data from baseline and follow-up surveys were matched using a self-generated anonymous code [33], leaving an analytical sample of 5541 students (78.3%). Additional information on the study design and study population has been published elsewhere [34].
---
Data collection and measures
Self-reported substance use, along with relevant cognitive, attitudinal, and psychometric variables, was assessed by an anonymous paper-and-pencil questionnaire administered in the classrooms without teachers' participation. Students were reassured about the confidentiality of their reports, and the anonymous code procedure was explained. Apart from language adaptation, the same questionnaire and assessment procedures were used across all countries and all data collection points. Most questions were retrieved from the "Evaluation Instruments Bank" (http://eib.emcdda.europa.eu/), accessed in 2004. A test-retest evaluation of the survey instrument was conducted during a pilot study [33].
The outcomes of interest in the present analysis were: average frequency of current alcohol consumption, past 30-day prevalence of episodes of drunkenness, intention to drink and to get drunk within the next year, and occurrence of problem-behaviours related to the use of alcohol. The latter occurrence was assessed by asking the students whether they, in the past 12 months, had experienced any of 11 problems, including fighting and injury, because of their drinking. Intentions to drink alcohol or to get drunk within the next year were reported by the students on a 4-point scale ranging from "Very likely" (1) to "Very unlikely" (4). In addition, we explored some individual psycho-social characteristics: perceived school performance, exposure to siblings' alcohol use, and perceived parents' tolerance concerning alcohol drinking. The questions used for the assessment of outcomes and predictors have been fully described in previous reports [34,35].
We dichotomized the frequency of alcohol consumption into "Any current drinking" versus "No current drinking", as well as into an indicator of frequent drinking ("Drinking at least weekly" versus "Drinking less than weekly or not at all"). Intentions to drink and to get drunk were also dichotomized into "Very likely" or "Likely" versus "Unlikely" or "Very unlikely". Since the baseline prevalence of each alcohol-related behavioural problem and of episodes of drunkenness was very low, we collapsed these responses into two dichotomous outcomes of "No alcohol-related problems" versus "Any problem" in the past 12 months, and "No episodes of drunkenness" versus "Any episode" in the past 30 days, respectively. Perceived school performance, based on self-comparison of own grades with those of the classmates, was coded as "Worse" versus "As good or better". Exposure to siblings' alcohol use was dichotomized, and students without siblings were considered unexposed to this influence. Perceived parents' tolerance concerning alcohol drinking was dichotomized into "Would not allow me to drink at all" versus "Others". Assessed socio-demographic characteristics included gender, age, school-grade, and the family living situation coded as "Living with both parents" versus "Other living situation".
Table 1. Indicator of social status, number of enrolled schools and students at baseline, by regional centre.
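A minimal pandas sketch of the outcome recoding described above is given below; the column names and the numeric response codes are illustrative assumptions rather than the actual EU-Dap questionnaire coding.

```python
import pandas as pd

# Hypothetical student-level responses (column names and codes are illustrative).
df = pd.DataFrame({
    "drink_freq":   [0, 1, 2, 4, 0],   # 0 = never ... 4 = about daily (assumed scale)
    "intent_drink": [1, 2, 3, 4, 2],   # 1 = very likely ... 4 = very unlikely
    "drunk_30d":    [0, 0, 1, 3, 0],   # episodes of drunkenness, past 30 days
})
# Eleven alcohol-related problem items, each 0/1 (e.g. fighting, injury).
problem_items = [f"problem_{i}" for i in range(1, 12)]
for col in problem_items:
    df[col] = 0
df.loc[3, "problem_1"] = 1             # one student reports such a problem

df["any_drinking"]     = (df["drink_freq"] > 0).astype(int)      # any current drinking
df["weekly_drinking"]  = (df["drink_freq"] >= 3).astype(int)     # at least weekly (assumed cut-off)
df["intends_to_drink"] = (df["intent_drink"] <= 2).astype(int)   # "very likely"/"likely"
df["any_drunkenness"]  = (df["drunk_30d"] > 0).astype(int)       # any episode, past 30 days
df["any_problem"]      = (df[problem_items].sum(axis=1) > 0).astype(int)  # any of 11 problems
print(df[["any_drinking", "weekly_drinking", "intends_to_drink",
          "any_drunkenness", "any_problem"]])
```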
---
Statistical analysis
We performed descriptive statistical analyses to summarize the main characteristics of the study sample. We tested the baseline equivalence by experimental condition of outcomes and predictors of interest separately by socioeconomic level using chi-square tests with the appropriate degrees of freedom.
Odds ratios (OR) and their corresponding 95% confidence intervals (95% CI) were estimated as measures of association between experimental condition and behavioural outcomes, separately for each socioeconomic level of the school area. A multilevel logistic regression model was fitted to account for the hierarchical structure of the data, with one random effect at the classroom level and one at the regional centre level [36]. We tested several established predictors of substance use as potential confounding variables, including gender, age, family living situation, family alcohol use, perceived school performance, perceived parents' tolerance concerning alcohol drinking, and the baseline status of the behaviour under study. Models were adjusted for variables on which the intervention and control groups significantly differed at baseline and for the baseline status of the outcome. We also formally tested for statistical interaction by including in the regression model a cross-product term between the treatment condition and the socioeconomic status indicator, coded as dummy variables. A significant likelihood ratio test statistic for this interaction term is evidence that treatment effects vary by school socioeconomic level. All analyses were performed using the statistical package MLwiN 2.2 [37]. All outcome analyses were intent-to-treat.
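For orientation, the sketch below (Python/statsmodels, which is not the software named in the text) illustrates the two ingredients of this analysis on synthetic data: the stratum-specific OR with its 95% CI, and the likelihood-ratio test for the treatment-by-SES cross-product term. It is a deliberately simplified single-level approximation; the published analysis used multilevel models with random effects for classroom and regional centre, which this sketch omits, and all variable names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic stand-in for the student-level data set (all names are illustrative).
rng = np.random.default_rng(0)
n = 1200
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),                    # 0 = control, 1 = intervention
    "ses": rng.choice(["low", "medium", "high"], n),   # SES level of the school area
    "gender": rng.integers(0, 2, n),
    "age": rng.integers(12, 15, n),
    "both_parents": rng.integers(0, 2, n),
    "drunk_base": rng.integers(0, 2, n),               # baseline status of the outcome
    "drunk_fu": rng.integers(0, 2, n),                 # episode of drunkenness at follow-up
})
covars = "gender + age + both_parents + drunk_base"

# Stratum-specific effect: OR and 95% CI for the intervention in low-SES schools.
low = df[df["ses"] == "low"]
m_low = smf.logit(f"drunk_fu ~ treat + {covars}", data=low).fit(disp=False)
or_treat = np.exp(m_low.params["treat"])
ci_lo, ci_hi = np.exp(m_low.conf_int().loc["treat"])
print(f"Low SES: OR = {or_treat:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")

# Likelihood-ratio test for the treatment-by-SES interaction on the full sample.
m_main = smf.logit(f"drunk_fu ~ treat + C(ses) + {covars}", data=df).fit(disp=False)
m_int  = smf.logit(f"drunk_fu ~ treat * C(ses) + {covars}", data=df).fit(disp=False)
lr = 2 * (m_int.llf - m_main.llf)
df_diff = m_int.df_model - m_main.df_model
print(f"LR test: chi2 = {lr:.2f}, df = {df_diff:.0f}, p = {stats.chi2.sf(lr, df_diff):.3f}")
```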
---
Results
The sample consisted of 5541 students, 49.1% of whom were females. Mean age was 13.2 years. At baseline, gender and age distributions differed among social levels (data not shown). Schools in the lowest level had a higher percentage of male and older students. Students in schools of high socioeconomic level were more likely than students in other schools to drink at least monthly (17.2% vs. 14.6%, p = 0.01) and to have intention to drink (43.7% vs. 39.0%, p < 0.01) while students in schools of low socioeconomic level were more likely to report recent episodes of drunkenness (7.0% vs. 4.0%, p < 0.01), intention to get drunk (20.0% vs. 17.6%, p = 0.03), and alcohol-related problem behaviours (4.2% vs. 3.0%, p = 0.02). The only difference among social levels with regard to the considered psychosocial variables was that students in schools of low socioeconomic level compared to other adolescents were more likely to perceive their school performance as worse than average (10.7% vs. 6.1%, p < 0.01).
Figure 1 shows the sample size and the equivalence of some baseline characteristics by experimental condition, separately by socioeconomic level. Within levels of socioeconomic environment we found different distributions between the control and intervention groups for gender, age, family living situation, frequency of alcohol consumption, and intentions to drink and to get drunk. Controls in the lower social level had higher proportions of well-known predictors of alcohol use (male gender, older age and early drinking experience) compared to students in the intervention group.
Missing information at baseline was negligible for all of the assessed characteristics (at most 2.1%, data not shown).
Participation in the programme was associated with a significantly lower prevalence of episodes of drunkenness and of intention to get drunk, compared to usual curricula, among students attending schools in a low socioeconomic context (Table 2). For both outcomes the estimated OR was approximately 0.60. The same students had an OR of 0.68 of reporting behavioural problems due to their drinking, but this effect was only marginally significant (p = 0.06). Concerning the frequency of alcohol consumption, the estimated effects did not reach statistical significance within sub-groups, but the estimates were consistently lower among students attending schools in disadvantaged contexts. No significant programme effects emerged for students in schools of medium or high socioeconomic level. Interactions between the intervention condition and socioeconomic status at the area level were statistically significant only for intention to get drunk (p = 0.02).
---
Discussion
In a multi-centre trial among European students we found some evidence that the effectiveness of a comprehensive social influence school-based preventive programme on problematic drinking might differ by socioeconomic environment of the school.
The differences indicated a higher preventive impact of the curriculum on episodes of drunkenness and intention to get drunk among students attending schools in a socially deprived context, compared to students in medium or high social context. The effects of the programme on the frequency of alcohol consumption and the intention to drink were weak and not statistically significant in subgroups, in line with results on the whole study sample [28]. However, even for these outcomes the direction of the estimated effects suggested a higher impact of the curriculum in schools in low social context.
The absence of statistical significance in most interaction tests is compatible with homogeneity of the effects among social strata. However, given the overall pattern of associations, consistently indicating the most favourable effect in areas with a low social index, it is also plausible that an existing difference was not detected due to limitations of the study, in particular the imperfect classification of the social status of the living environment and the limited sample sizes. Few studies have examined how socioeconomic characteristics influence the effectiveness of school-based substance use prevention. If only life skills training approaches are considered, the evidence is extremely scant and based on observations limited to low social class contexts. In fact, evaluation studies have reported preventive effects on alcohol use in low socioeconomic contexts for Botvin's "LifeSkills Training" [38][39][40] as well as for the "keepin' it REAL" curriculum of the Drug Resistance Strategies Project [41]. None of these studies provided a comparison of the programme impact with upper social context populations. As an exception, the original edition of Project ALERT was proven equally effective in schools with populations of high and low average social level, but the programme resulted in only short-lived effects for alcohol use [42].
Table 2. Results from multilevel models adjusted for gender, age, family living situation and baseline status of the outcome: odds ratios (OR) and 95% confidence intervals (95% CI) of alcohol-related behaviour for students in the intervention group compared to the controls, by socioeconomic level of the school area. The EU-Dap Study, 18-month follow-up.
To our knowledge, only one recent study has investigated how neighbourhoods influence the effectiveness of a school curriculum in preventing alcohol use [43]. This study reported that living in poorer neighbourhoods decreased the programme's effectiveness in one ethnic subgroup of the sample.
A possible explanation for the indication of effect modification by social environment on problematic drinking in our study is that the curriculum was more relevant to schools with a low average socioeconomic status of the population. It is also plausible that neighbourhood disadvantage correlates with a lack of educational resources and of social and familial support for adolescents; the relative "preventive gain" from school prevention would therefore be higher in these under-privileged contexts. A differential teachers' response to training is another possible explanation: teachers in schools from socially disadvantaged communities may have taken greater advantage of the training, improving their capability to conduct interactive teaching to a larger extent than teachers in communities of medium or high socioeconomic status. It is also possible that contamination occurred to a larger extent in control schools from medium or high socioeconomic areas, if these schools conducted other health-promoting interventions based on skill-enhancing methods similar to the "Unplugged" curriculum.
There are three major weaknesses in this study. First, the sample size was calculated to study the programme effects on the whole sample. Economic and organizational difficulties made it impossible to sample a number of schools sufficient to explore differential effects across sub-groups. Given the need to employ a multilevel analysis, the study had limited statistical power to detect intervention effects for specific subgroups. Despite the lack of power, tendencies in the results were consistent in indicating a higher effectiveness in socially disadvantaged contexts, with significant interaction for one outcome.
Secondly, the participating centres classified the socioeconomic status of school areas using the best locally available indicators and sources of information, which however differed among centres and were not validated. This may have led to measurement error and misclassification of social grouping for some schools. However, since schools in the EU-Dap study were randomised within blocks of social level, the misclassification would be independent of the experimental arm as well as of the study outcome. The most probable consequence of this nondifferential classification error would be to bias the effect estimates towards the value expected under the null hypothesis, i.e. an underestimation of the effect modification.
Thirdly, intervention and control arms within each socioeconomic level differed on some potential confounders related to baseline characteristics, despite the random assignment. This was probably due to group allocation of a relatively low number of schools within each socioeconomic block. Therefore, some residual confounding within strata could be present, although all analyses were adjusted for measured baseline factors that could constitute potential confounders.
Lack of information on socioeconomic status at the individual level could also be acknowledged as a limitation. However, this was rather the consequence of a deliberate choice, since children's reports of parental occupation or education are generally not reliable [44].
This was one of the few studies designed to consider differential effects of a preventive programme across socioeconomic groups. In fact, the assignment of schools to treatment or control conditions was accomplished through block randomisation that controlled for environmental socioeconomic characteristics, thus achieving a balanced representation of social strata in the study sample. Also, many different behavioural aspects of alcohol use were investigated.
---
Conclusions
The innovative school curriculum evaluated in the EU-DAP study seems to have a beneficial preventive effect on problem drinking, motivating its further dissemination in schools in lower socioeconomic levels.
Since higher prevalence rates of unhealthy behaviours among lower socioeconomic groups contribute substantially to socioeconomic inequalities in health [45], universal prevention programmes that are effective in lower socioeconomic groups may be useful in reducing this socioeconomic gap, one of the major priorities of public health policy in Europe.
---
Authors' contributions
FF and MRG designed the study. MPC and MRG drafted the paper. FF and RB contributed to revising the paper. MPC performed the statistical analysis. The members of the EU-Dap Study Group carried out the intervention and collected the data. MPC has overall responsibility for the paper. All authors contributed to and approved the final manuscript.
---
Competing interests
The authors declare that they have no competing interests. | 21,794 | 1,938 |
345346755849ebe07f12873902434c59ad174d9b | The Influence of Culture Capital, Social Security, and Living Conditions on Children’s Cognitive Ability: Evidence from 2018 China Family Panel Studies | 2,022 | [
"JournalArticle"
] | The aim of this study was to analyze the influence of economic capital, culture capital, social capital, social security, and living conditions on children's cognitive ability. However, most studies only focus on the impact of family socio-economic status/culture capital on children's cognitive ability by ordinary least squares regression analysis. To this end, we used the data from the China Family Panel Studies in 2018 and applied proxy variable, instrumental variables, and two-stage least squares regression analysis with a total of 2647 samples with ages from 6 to 16. The results showed that family education, education expectation, books, education participation, social communication, and tap water had a positive impact on both the Chinese and math cognitive ability of children, while children's age, gender, and family size had a negative impact on cognitive ability, and the impact of genes was attenuated by family capital. In addition, these results are robust, and the heterogeneity was found for gender and urban location. Specifically, in terms of gender, the culture, social capital, and social security are more sensitive to the cognitive ability of girls, while living conditions are more sensitive to the cognitive ability of boys. In urban locations, the culture and social capital are more sensitive to rural children's cognitive ability, while the social security and living conditions are more sensitive to urban children's cognitive ability. These findings provide theoretical support to further narrow the cognitive differences between children from many aspects, which allows social security and living conditions to be valued. | Introduction
This study analyzed the influence of economic capital, culture capital, social capital, social security, and living conditions on children's cognitive ability.
With deepening development and reformation of education, the human capital cultivation of children is becoming a key step for many families. A fundamental aspect of the cultivation is children's cognitive ability, which is the ability of human beings to extract, store, and use information from the objective world. It mainly involves human abstract thinking, logical deduction, and memory (Autor 2014). As documented, there is a significant correlation between family factors and children's cognitive ability (Zimmer et al. 2007;Kleinjans 2010;Li 2012Li , 2017;;Saasa 2018;Fan et al. 2019;Wang and Lin 2021). Specifically, there are three different capital theories that focus on the impact of family on children's cognitive ability, namely economic capital, cultural capital, and social capital (Bourdieu and Wacquant 1992;Farkas 2003). In particular, the impact of cultural capital is particularly important (Li and Zhao 2017;Yao and Ye 2018;Zhang and Su 2018;Hong and Zhang 2021) since economic capital reflects its value only by cultural capital (Hong and Zhao 2014). In addition, there are great differences in economic capital, social capital, and cultural capital between urban and rural families, which leads to the urban-rural education gap (Jin 2019). These findings concentrate on the influence of family capital on children's cognitive ability, but the social security and living conditions are not touched upon. In contrast, this study investigated the influence of all those factors on children's cognitive ability, in particular, the social security and living conditions.
Different family capital has corresponding measurement indicators. In particular, economic capital includes family income (Yang and Wan 2015;Fang and Hou 2019;Hou et al. 2020), health investment (Shen 2019;Wu et al. 2021), and education expenditure (Lin et al. 2021;Fang and Huang 2020), which refers to the sum of economic related resources owned by a family (Xue and Cao 2004). Culture capital is not only reflected in the diplomas obtained by family members, but also in the educational concept, attitude, and expectation of parents for their children (Guo and Min 2006), which includes three forms: concrete culture capital, such as family parenting (Zhang et al. 2017;Huang 2018), lifestyle (Wu et al. 2020), education expectation (Gu and Yang 2013;Wang and Shi 2014;Xue 2018;Zhou et al. 2019), participation (Wei et al. 2015;Liu et al. 2015;Liang et al. 2018); objectified culture capital, including books (Hong and Zhao 2014;Yan 2017); and institutionalized culture capital, referring to the educational diploma obtained (Xie and Xie 2019;Zhu et al. 2018). From the perspective of micro social network, social capital referred to in this paper is defined as a kind of resource embedded in the network (Granovetter 1973), which takes social capital as a new form of capital, so that actors can obtain a better professional position or business opportunities, so as to affect the income return (Lin 2005). In specific, social capital includes occupation (James 2000;Teacherman 2000;Fang and Feng 2005;Zhou and Zou 2016;Zhu and Zhang 2020), social communication (Putnam 2000; Liang 2020; Yang and Zhang 2020), information utilization (Cao et al. 2018;Zheng et al. 2021), and human expenditure (Wang and Gong 2020).
Social security can improve residents' household consumption (Fang and Zhang 2013;Yang and Yuan 2019) and alleviate economic poverty (Guo and Sun 2019) through income redistribution, which can increase the economic capital of families and affect investment in children. Thus, the social security affects children's cognitive ability, including medical insurance (Chen et al. 2020), endowment insurance (Xue et al. 2021), and government support (Liu and Xue 2021;Yin and Fan 2021). Living conditions refer to the family infrastructure and facilities that affect children's lives, including safe drinking water, sanitary toilets, clean energy, waste treatment, and sewage treatment (Zhao et al. 2018). In particular, exposure to air pollution (Chen et al. 2017a;Schikowski and Altug 2020;Nauze and Severnini 2021), water (Chen et al. 2017b;Gao et al. 2021), and fuel (Cong et al. 2021;Chen et al. 2021) also affects cognitive ability. Other factors include family structure (Zhang 2020;Jiang and Zhang 2020), family size (Liu and Jin 2020;Fang et al. 2020), and family health (Li and Fang 2019). Unlike previous work, this study applied instrumental variables and two-stage least squares regression analysis to solve the endogenous problem, assessing the influence of numerous factors on children's cognitive ability. The robustness of this study's results was assessed by controlling sample size and increasing variables.
In addition, children's individual and social characteristics affect cognitive ability. For example, the performance of girls is better than that of boys, although the gender difference is decreasing (Hao 2018). The older the migrant child, the worse the academic performance (Wang and Chu 2019). Number of siblings has a significant impact on youth's cognitive ability (Tao 2019). In contrast, this study investigated heterogeneity in gender and urban location for those influences.
This study examined the impact of numerous factors, including social security and living conditions, on children's cognitive ability, using data from the China Family Panel Studies in 2018. Rather than the ordinary least squares method, the study used two-stage least squares regression to solve endogeneity. In addition, we explored heterogeneity in gender and urban location and the impact of those factors on children's cognitive ability. These results obtained may provide guidance for the government, society, and families to improve children's cognitive ability.
The remainder of this paper is organized as follows. Section 2 describes data, variables, and summary statistics. Section 3 outlines the basic model for the influence of those factors on children's cognitive ability. Section 4 describes the instrumental variable test, endogeneity test, empirical results, and robustness test. Section 5 outlines the heterogeneity analysis of gender and urban location. Section 6 concludes.
---
Data, Variable, and Summary Statistics
---
Data
This study used the data from the China Family Panel Studies (CFPS), a tracking survey of individuals, families, and communities implemented by China Social Science Investigation Center of Peking University, which aims to reflect the changes of China's society, economy, education, and health. The data sample covers 25 provinces/cities/autonomous regions, and the respondents include all family members. In the implementation of the survey, the multi-stage, implicit stratified, and population scale proportional sampling method was used. The main research object of this study was children aged 6-16. Since the respondents of the CFPS personal self-administered questionnaire are children over nine years old, and children's cognition of their own situation is not necessarily accurate, this study mainly used the children's proxy questionnaire and combined the relevant variables such as parents' situation in the personal self-administered questionnaire and family basic information in the family questionnaire. The data supported this work. The basic information related to families, parents, and their children in 2018 was extracted and matched with the data.
---
Explained Variables
Following Li and Shen (2021), Wu et al. (2020), and Dong and Zhou (2019), children's Chinese and math scores were used in this study to measure Chinese cognitive understanding ability and math reasoning cognitive ability, respectively, using the "How about Chinese score" and "How about math score" tests in the CFPS questionnaire, both of which use ordinal categorical variables (1 for "fail", 2 for "intermediate", 3 for "good", and 4 for "distinction").
---
Explanatory Variables
In this study, the main explanatory variables were divided into five parts. They are economic capital, culture capital, social capital, social security, and living conditions.
Economic capital was measured by family income, children's health investment, and education investment. All are continuous variables, and 1 was added to each before taking the natural logarithm.
Culture capital was measured by the questions of "How many books do you have in your family?", "What is the highest degree you have completed?", "What level of education do you want your child to attain?", "How often do you discuss what's happening at school with your child?", and "When your children's grades are not satisfactory, which way do you usually deal with them?". They represent the family books, education, educational expectation, educational participation, and parenting style, respectively. There are three aspects of culture capital, namely the objective, institutional, and concrete culture capital (Bourdieu and Passeron 1977). For family education and education expectation, 0 is for illiterate/semi-illiterate, 1 for nursery, 2 for kindergarten, 3 for primary school, 4 for junior middle school, 5 for senior middle school, 6 for junior college, 7 for undergraduate, 8 for master, and 9 for doctor. For parenting style, we redefined scolding the child, spanking the child, and restricting the child's activities as 0, and contacting the teacher, telling the child to study harder, helping the child more, and doing nothing as 1. Among them, 0 is for stern parenting, and 1 is for gentle parenting. Family books and children's education participation are continuous variables, and the number of books was added 1 before taking the natural logarithm. In addition, family lifestyle consists of smoking, drinking, exercise, and lunch break, which is an ordered variable. Social capital was measured by "nature of work", "information utilization", "social communication", and "human expenditure". For job, 1 is unemployed, 2 is agricultural work, and 3 is non-agricultural work. We used the questions of "Do you use a mobile phone?", "Do you use mobile devices?", and "Do you use a computer to surf the Internet?" to measure the information utilization. We defined information utilization as follows: 0 means that none is used, 1 means that at least one is used, 2 means that at least two are used, and 3 means that at least three are used. The questions of "How good do you think your relationship is?" and "How do you rate your trust in your neighbors?" were used to measure the social communication. We summed and then averaged the answers to these two questions and obtained a continuous variable. Human expenditure is a continuous variable and was added 1 before taking the natural logarithm.
Social security was measured by the participation of medical and endowment insurance and government support. Among them, medical and endowment insurance are continuous variables. For government support, 0 is for not accepting subsidies, and 1 is for accepting the subsidies.
Living conditions were measured by the questions about "water for cooking", "cooking fuel", and "indoor air purification", where the answer 0 is for no and 1 is for yes. Specifically, for tap water, 0 represents no tap water use, and 1 is for tap water use. For cooking fuel, 0 is for no use of clean fuel, and 1 is for clean fuel use. For air purification, 0 is for no air purification, and 1 is for use of air purification. In addition, for gender, 0 is for women and 1 is for men. The registered residence was recoded: 0 is for rural, and 1 is for urban. Marital status was recoded: 0 is for unmarried, and 1 is for married. For nationality, 0 is for others, and 1 is for Han nationality. Family age, the child's age, and family size are continuous variables. For family health, 1 denotes unhealthy, 2 relatively unhealthy, 3 average, 4 relatively healthy, and 5 very healthy. We used the question "How many times a week do you eat with your family?" to measure the family relationship.
In addition, we consider parents' cognitive ability as a proxy variable for genes. In the 2018 CFPS children's questionnaire, the respondent may be the father or the mother. Following Li and Zhang (2018), we selected two dimensions, the father's or mother's word ability and mathematical ability, to construct the parents' cognitive ability indicator. For comparability, we standardized the scores of word ability and mathematical ability and summed them to obtain a comprehensive cognitive ability measure, which is recorded as family cognitive ability.
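As a minimal illustration of the variable construction described above, the pandas sketch below applies the ln(x + 1) transformation to the monetary and count variables and builds the family cognitive ability proxy by standardizing and summing the parental word and math scores; all column names are illustrative assumptions rather than CFPS variable codes.

```python
import numpy as np
import pandas as pd

# Hypothetical family-level extract (column names and values are illustrative).
fam = pd.DataFrame({
    "family_income": [52000.0, 8000.0, 120000.0],
    "health_invest": [300.0, 0.0, 1500.0],
    "edu_invest":    [4000.0, 500.0, 20000.0],
    "n_books":       [20, 0, 300],
    "parent_word":   [24, 10, 30],   # parent's word-ability score
    "parent_math":   [12, 4, 20],    # parent's math-ability score
})

# ln(x + 1) transform for the continuous money and count variables.
for col in ["family_income", "health_invest", "edu_invest", "n_books"]:
    fam[f"ln_{col}"] = np.log1p(fam[col])

# Family cognitive ability proxy: z-standardize each parental score, then sum.
def zscore(s: pd.Series) -> pd.Series:
    return (s - s.mean()) / s.std()

fam["family_cognition"] = zscore(fam["parent_word"]) + zscore(fam["parent_math"])
print(fam[["ln_family_income", "ln_n_books", "family_cognition"]])
```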
Table 1 shows the summary statistics of the variables. After deleting invalid values, 2647 valid samples were included. As shown in Table 1, for children's characteristics, approximately 54% of the children were boys, 46% were girls, 43% lived in urban areas, and 57% lived in rural areas; the children's ages ranged from 6 to 16. For family characteristics, approximately 35% were male, 65% were female, 96% had a spouse, the family age ranged from 18 to 78, and the average family size was 5.
For family economic capital, the mean values of family income, children's health investment, and education investment are 10.74, 4.30, and 7.27, respectively; education investment is significantly greater than health investment. For family culture capital, approximately 89% of families adopted a mild parenting approach, the frequency of families talking with their children is 3.26, the average educational level of the family is primary school, and the family education expectation is undergraduate. The average value of family lifestyle is 1.87, indicating that families engage in at least two of smoking, drinking, exercise, and lunch break. The average number of books collected in the family is 2.51. Institutionalized and objectified cultural capital are not high, but the level of concrete cultural capital is relatively high, indicating that families pay considerable attention to education.
For family social capital, family non-agricultural employment is significantly greater than agricultural employment or unemployment; the average values of family information utilization and human expenditure are 1.79 and 7.372, respectively; the average social communication score is 6.83; and family social capital overall is moderate to good. For family social security, every family has at least one kind of medical insurance and endowment insurance, and at least half have received government subsidies. For living conditions, the shares of families using tap water, clean fuel, and air purification are 73%, 70%, and 3%, respectively; the popularity of tap water and clean fuel is high, while the popularity of air purifiers is low. In addition, children's Chinese and math cognitive abilities were both moderate; the average cognitive ability in math is higher than that in Chinese.
For family cognitive ability, the average Chinese and math cognitive ability scores are 18.33 and 8.74, respectively, and the overall level of family cognitive ability is not high. The standardized and aggregated comprehensive family cognitive ability is included in Table 1, with a maximum of 4.70 and a minimum of -3.53.
---
Basic Model
This study included 29 characteristics as covariates. To investigate the effects of those factors on children's Chinese and math cognitive abilities, respectively, we established the following model.
$$E_{ni} = \beta_0 + \sum_{k=1}^{3} \beta_{k1} C_{ki} + \sum_{j=1}^{6} \beta_{j2} F_{ji} + \sum_{l=1}^{20} \beta_{l3} S_{li} + \varepsilon_i \quad (1)$$
where $E_{ni}$ is the $n$-th cognitive ability of child $i$ ($n = 1, 2$, where 1 is for Chinese and 2 for math); $C_{ki}$ is the $k$-th child characteristic of child $i$ ($k = 1, 2, 3$); $F_{ji}$ is the $j$-th item of family information for child $i$ ($j = 1, 2, \ldots, 6$); $S_{li}$ is the $l$-th item of family capital and family cognitive ability for child $i$ ($l = 1, 2, \ldots, 20$); $\beta_{k1}$, $\beta_{j2}$, and $\beta_{l3}$ are the corresponding parameters; and $\varepsilon_i$ is the regression error term.
Based on the above model, we first obtained results using ordinary least squares (OLS) regression. However, owing to reverse causality and confounding factors, we had to find a proxy variable for genes and instrumental variables to address endogeneity, and to verify them against the relevant assumptions. Thus, we used two-stage least squares (2SLS) as the main empirical approach and compared it with ordinary least squares (OLS). As a robustness check, we conducted the analysis with added variables and a controlled sample size. In addition, heterogeneity by gender and urban location was examined based on two-stage least squares (2SLS).
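The following is a hedged sketch of the OLS baseline corresponding to Equation (1), written with statsmodels' formula interface on synthetic data; only a subset of the 29 covariates is shown, and all names are illustrative assumptions rather than the actual CFPS variable codes.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the analysis sample (names and values are illustrative).
rng = np.random.default_rng(2018)
n = 500
data = pd.DataFrame({
    "child_age":         rng.integers(6, 17, n),
    "child_gender":      rng.integers(0, 2, n),
    "fam_size":          rng.integers(2, 9, n),
    "fam_education":     rng.integers(0, 10, n),
    "edu_expectation":   rng.integers(3, 10, n),
    "ln_books":          np.log1p(rng.integers(0, 300, n)),
    "edu_participation": rng.integers(1, 6, n),
    "social_comm":       rng.uniform(0, 10, n),
    "tap_water":         rng.integers(0, 2, n),
    "family_cognition":  rng.normal(0, 1, n),
    "chinese_score":     rng.integers(1, 5, n),   # ordinal score 1-4
})

# OLS baseline: children's Chinese cognitive ability on the listed covariates.
ols_res = smf.ols(
    "chinese_score ~ child_age + child_gender + fam_size + fam_education"
    " + edu_expectation + ln_books + edu_participation + social_comm"
    " + tap_water + family_cognition",
    data=data,
).fit()
print(ols_res.summary().tables[1])   # coefficient table only
```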
Concerning the genes and environment shared by parents and children, we offer the following discussion. On the one hand, the social environments experienced by children and by their parents are different. Specifically, the children studied in this paper were born in the 21st century, so they did not experience major social changes and disasters, whereas their parents experienced great social changes, for example the Cultural Revolution, educational reform, and natural disasters. On the other hand, inequality of family resources will lead to inequality in children's cognitive ability, and early skills depend partly on genetics (Plomin and Stumm 2018; Silventoinen et al. 2020). These two factors usually produce an interesting pattern: the greater the importance of one, the smaller that of the other. However, as found by Houmark et al. (2020), the relative importance of genes depends on how investment is distributed among children, whether it comes from parents or from society. As also found by Ronda et al. (2020), the worse the childhood environment, including family resources, the weaker the role of children's genes. In addition, it has been shown that cognitive ability can be developed through acquired cultivation (Hu and Xie 2011; Kuang et al. 2019; Zhou et al. 2021); moreover, cognitive ability in this paper refers to children's word understanding ability and mathematical reasoning ability, measured by the scores of Chinese and math tests, respectively, rather than by IQ test scores, which largely depend on genes. Furthermore, as observed in the CFPS samples, the Chinese and math cognitive abilities of children with the same family ID were inconsistent. In particular, since the 2018 China Family Panel Studies data applied in this work do not provide genetic information, we take parents' cognitive ability as the proxy variable for genes in the regression analysis.
In this study, proxy variables meet the following two conditions: (1) After introducing proxy variables (parental cognitive ability), there is no correlation between family capital and genes. Indeed, following Zheng et al. (2018), family capital is an acquired environmental factor. (2) Once the genes are observed, parents' cognitive ability will no longer mainly explain children's cognitive ability. Specifically, parental cognitive ability is highly correlated with their genes, and parental cognitive ability is not collinear with other explanatory variables. As checked, parental cognitive ability is not related to random error, and family cognitive ability can be used as a proxy variable to reflect the genetic difference.
Following Cui and Susan (2022), instrumental variables and two-stage least squares regression are applied. In particular, when the exposed group and the non-exposed group are not comparable, some background variables need to be used to stratify the total group so that the exposed and non-exposed sub-groups become comparable. Instrumental variable analysis can control such bias in observational studies (Geng 2004; Brookhart et al. 2006). The instrumental variable and two-stage least squares analysis in this paper is presented in Section 4.2.
---
Results
---
Results from OLS
Using the 2018 CFPS survey data, we successively incorporated family cognitive ability and family capital into the regression and applied the ordinary least squares (OLS) method to investigate the influence of family economic capital, culture capital, social capital, social security, living conditions, and family cognitive ability on children's Chinese and math cognitive abilities. After excluding the influence of collinearity, the results are shown in the second to fifth columns of Table 2. As shown in the second and third columns of Table 2, the effect of family cognitive ability on children's cognitive ability was significant (0.054, p < 0.01), i.e., shared genes partly determine children's cognitive ability. As shown in the fourth and fifth columns of Table 2, the effect of family cognitive ability is no longer significant, i.e., the role of genes is weakened by family capital, as has also been confirmed by Ronda et al. (2020). Besides, children's age (-0.053, p < 0.01) and gender (-0.287, p < 0.01) have a significant influence on Chinese cognitive ability, while only children's age (-0.096, p < 0.01) has a significant influence on math cognitive ability. The influence of children's age and gender on the two cognitive abilities is negative, while family age (0.005, p < 0.05; 0.007, p < 0.01) has a positive effect on children's Chinese and math cognitive abilities.
For family culture capital, family education (0.081, p < 0.01; 0.085, p < 0.01), education expectation (0.122, p < 0.01; 0.163, p < 0.01), and family books (0.020, p < 0.1; 0.019, p < 0.1) have a positive impact on the two cognitive abilities. Among them, education expectation has the greatest impact, followed by family education and family books; the influence of education expectation and family education on math cognitive ability is greater than that on Chinese, while the influence of family books shows the opposite pattern. The more frequently families participate in education (0.082, p < 0.01; 0.058, p < 0.01), the better their children's cognitive abilities, and the impact on Chinese cognitive ability is greater than the impact on math. For family social capital, the impact of social communication on both children's Chinese (0.045, p < 0.01) and math (0.038, p < 0.01) cognitive abilities is positive. For living conditions, only tap water (0.089, p < 0.05) exhibited a positive impact on children's Chinese cognitive ability. In general, cultural capital has the greatest impact, followed by living conditions and social capital, while the influence of family economic capital is not significant. The above results are based on ordinary least squares (OLS).
---
Endogeneity Test
In Equation (1), to avoid endogeneity problems caused by omitted variables, we consider children's characteristics and family information, including age, gender, nationality, residence, marriage, and family size; these variables have been shown to affect children's cognitive ability in previous studies. In this model, the main endogeneity problems may be caused by confounding factors and mutual causality. For example, children of high cognitive ability may have better genes than those of low cognitive ability; even if children of high cognitive ability do not receive acquired training, they are still more likely to attain high cognitive ability because of their excellent genes. However, as summarized by Miettinen and Cook (1981), confounding factors are independent risk factors whose distribution differs between the exposed and non-exposed populations. We therefore take family cognitive ability as the proxy variable for genes.
Family books and family medical insurance were identified as endogenous variables by the test, while family cognitive ability was not. Possible causes are confounding factors or mutual causality. Regarding mutual causality, family books and family medical insurance may affect children's cognitive ability; conversely, parents of children with higher cognitive ability may buy more books to support and encourage them, and their medical insurance decisions may also change (Zhang and Li 2021). Therefore, we address these problems by selecting appropriate instrumental variables. Specifically, we adopted instrumental variables (IVs) and two-stage least squares (2SLS). We used the lagged variable Bookiv as the instrumental variable for family books and the average participation rate of medical insurance (Mediv) in 28 provinces as the instrumental variable for medical insurance.
Our instrumental variables satisfy the assumptions of IVs (Angrist et al. 1996). Specifically, Bookiv is highly correlated with family books, and its impact on children's cognitive ability is realized through family books rather than directly. Mediv is highly correlated with family medical insurance, while the provincial average participation rate does not have a direct impact on children's cognitive ability. No other confounding factors link the instrumental variables and children's cognitive ability; as in the previous literature, the factors that affect children's cognitive ability were included in the regression to avoid the influence of confounding factors. To ensure that the IV estimation was reliable, we conducted the weak instrumental variable test; the results confirm that family books and medical insurance are endogenous variables, and the Cragg-Donald-Wald F statistic is 30.984, which is clearly greater than 10.
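A hedged sketch of the second-stage estimation is shown below using the `linearmodels` package (a choice made here for illustration; the paper does not name its software). Family books and medical insurance are treated as endogenous, with the lagged book variable (Bookiv) and the provincial average insurance participation rate (Mediv) as instruments; the synthetic data and all column names are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

# Synthetic stand-in data (all column names are illustrative assumptions).
rng = np.random.default_rng(7)
n = 800
df = pd.DataFrame({
    "chinese_score": rng.integers(1, 5, n).astype(float),
    "child_age":     rng.integers(6, 17, n).astype(float),
    "child_gender":  rng.integers(0, 2, n).astype(float),
    "fam_education": rng.integers(0, 10, n).astype(float),
    "bookiv":        rng.normal(3, 1, n),       # lagged family books (instrument)
    "mediv":         rng.uniform(0.5, 1.0, n),  # provincial insurance rate (instrument)
})
# Endogenous regressors, generated here so that they correlate with the instruments.
df["ln_books"]    = 0.8 * df["bookiv"] + rng.normal(0, 0.5, n)
df["medical_ins"] = 2.0 * df["mediv"] + rng.normal(0, 0.3, n)

# 2SLS: exogenous covariates outside the brackets, [endogenous ~ instruments] inside.
iv_res = IV2SLS.from_formula(
    "chinese_score ~ 1 + child_age + child_gender + fam_education"
    " + [ln_books + medical_ins ~ bookiv + mediv]",
    data=df,
).fit()
print(iv_res.summary)
# First-stage diagnostics (partial F statistics) help flag weak instruments,
# in the spirit of the Cragg-Donald-Wald F statistic reported in the text.
print(iv_res.first_stage)
```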
As shown in sixth and seventh columns of Table 2, children's age (-0.055, p < 0.01) and gender (-0.284, p < 0.01) have significant influence on their Chinese cognitive ability. The influence of children's age and gender on the two cognitive abilities is negative, while the influence of family age (0.006, p < 0.05; 0.008, p < 0.01) is positive. For family culture capital, family education (0.087, p < 0.01; 0.090, p < 0.01), education expectation (0.116, p < 0.01; 0.158; p < 0.01), and books (0.101, p < 0.05; 0.089, p < 0.1) have a positive impact on the two cognitive abilities. Similarly, education expectation has the greatest impact, followed by family education and books, and the influence of education expectation and family education on math cognitive ability is greater than that of Chinese, respectively, while the influence of family books is the opposite. The more frequently families participate in education (0.078, p < 0.01; 0.055, p < 0.01), the better their children's cognitive abilities, and the impact on Chinese cognitive ability is greater than on math. For family social capital, the impact of social communication on both children's Chinese (0.048, p < 0.01) and math (0.039, p < 0.01) cognitive abilities is positive. In addition, for family social security, medical insurance (-1.427, p < 0.01; -1.273, p < 0.01) has negative impact on both Chinese and math cognitive abilities, while endowment insurance (0.229, p < 0.01; 0.183, p < 0.05) has positive impact on both Chinese and math cognitive abilities. Tap water (0.091, p < 0.1) has a positive impact on children's Chinese cognitive ability. After introducing instrumental variables, the impact of family books and medical insurance on children's cognitive ability increased. The above results are based on the two-stage least squares (2SLS).
---
Robustness Checks
To verify the reliability of the estimated results, we carried out robustness checks using three methods. Specifically, we controlled the sample size and the number of explanatory variables and took the family health and family relationship into account. Family health refers to the self-evaluation of family health: 1 for unhealthy and 5 for healthy. Family relationship is a continuous variable measured by the number of meals with family members.
As shown in the second and third columns of Table A1 in Appendix A, children's age (-0.055, p < 0.01; -0.098, p < 0.01), children's gender (-0.284, p < 0.01, for Chinese), family age (0.007, p < 0.05; 0.009, p < 0.01), family education (0.086, p < 0.01; 0.089, p < 0.01), education expectation (0.115, p < 0.01; 0.157, p < 0.01), books (0.105, p < 0.05; 0.093, p < 0.05), education participation (0.076, p < 0.01; 0.054, p < 0.01), social communication (0.044, p < 0.01; 0.035, p < 0.01), medical insurance (-1.450, p < 0.01; -1.287, p < 0.01), endowment insurance (0.236, p < 0.01; 0.188, p < 0.05), and tap water (0.092, p < 0.05, for Chinese) still have significant influence on children's cognitive ability. Family health (0.038, p < 0.05; 0.041, p < 0.05) has a positive impact on the two cognitive abilities. Similarly, as shown in the fourth, fifth, sixth, and seventh columns in Table A1 in Appendix A, the significance remains unchanged. Therefore, the results based on 2SLS are robust.
---
Heterogeneity Analysis
Heterogeneity analyses were conducted to examine whether the influence of family factors on children's Chinese and math cognitive abilities differs across subgroups.
---
Heterogeneity in Gender
As shown in Table A2 in Appendix A, for family culture capital, the influence of family education (0.100, p < 0.01; 0.102, p < 0.01, for girls) and education participation (0.133, p < 0.01; 0.104, p < 0.01, for girls) on girls' cognitive ability is greater than that on boys'. The influence of family education expectation on girls' (0.157, p < 0.01) Chinese cognitive ability is greater than that on boys' (0.093, p < 0.01), while the influence of family education expectation on boys' (0.162, p < 0.01) math cognitive ability is greater than that on girls' (0.157, p < 0.01). Family books (0.135, p < 0.1) have a significant impact only on girls' Chinese cognitive ability. For family social capital, social communication has the greatest impact on girls' cognitive ability (0.054, p < 0.01; 0.049, p < 0.05, for girls). For social security, medical insurance (-1.958, p < 0.05; -1.619, p < 0.05, for girls) and endowment insurance (0.298, p < 0.05; 0.271, p < 0.05, for girls) have the greatest impact on girls' cognitive ability. For living conditions, only tap water has a positive impact on boys' math cognitive ability (0.145, p < 0.05). In addition, the larger the family size, the greater the impairment of boys' math cognitive ability. Therefore, girls' cognitive ability is more sensitive to culture capital, social capital, and social security, while boys' cognitive ability is more sensitive to living conditions.
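As a minimal illustration (with an assumed DataFrame `df` and hypothetical column names, not the authors' code), such a heterogeneity check can be run by splitting the sample and estimating the same specification within each subgroup:

```python
# Illustrative sketch only: estimate the same (assumed) specification separately for
# boys and girls and compare selected coefficients across the two subsamples.
from linearmodels.iv import IV2SLS

controls = ["child_age", "family_age", "family_education", "edu_expectation",
            "edu_participation", "social_communication", "endowment_insurance",
            "tap_water", "family_size"]

for gender, sub in df.groupby("child_gender"):          # e.g. 0 = girl, 1 = boy (assumed coding)
    fit = IV2SLS(
        dependent=sub["chinese_score"],
        exog=sub[controls].assign(const=1.0),
        endog=sub[["family_books", "medical_insurance"]],
        instruments=sub[["Bookiv", "Mediv"]],
    ).fit(cov_type="robust")
    print(gender, fit.params[["family_education", "edu_expectation", "family_books"]])
```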
---
Heterogeneity in Urban Location
As shown in Table A3 in Appendix A, for family culture capital, the influence of family education on the cognitive ability of rural children (0.101, p < 0.01; 0.116, p < 0.01) is greater than that of urban children (0.065, p < 0.05; 0.069, p < 0.05). Family education expectation has the greatest impact on rural children's math cognitive ability (0.191, p < 0.01) and urban children's Chinese cognitive ability (0.123, p < 0.01). Family books affect only the math cognitive ability of urban children (0.108, p < 0.1). Family education participation has the greatest impact on rural children's Chinese cognitive ability (0.092, p < 0.01) and the least impact on urban children's Chinese cognitive ability (0.054, p < 0.1). For social communication, the impact on the cognitive ability of rural children (0.057, p < 0.01; 0.039, p < 0.05) is greater than that of urban children (0.041, p < 0.05; 0.035, p < 0.1). Medical insurance (-1.468, p < 0.01; -1.087, p < 0.05) and endowment insurance (0.243, p < 0.05; 0.193, p < 0.1) have a significant impact on the cognitive ability of urban children but not on that of rural children.
For living conditions, only tap water (0.149, p < 0.1) was significant, and only for urban children's Chinese cognitive ability. Therefore, rural children's cognitive ability is more sensitive to culture capital and social capital, while urban children's cognitive ability is more sensitive to social security and living conditions.
---
Conclusions
This study used the data from the 2018 China Family Panel Studies to analyze the impact of numerous factors on children's Chinese and math cognitive ability.
Firstly, children's and families' characteristics have a significant impact on children's Chinese and math cognitive ability. Among them, children's age, gender, and family size are negatively associated with children's cognitive ability, while family age has a positive impact. Within family culture capital, family education, education expectation, books, and education participation have a positive impact on children's cognitive ability. For family social capital, the more family social communication, the higher children's cognitive ability. For family living conditions, the use of tap water is more conducive to the improvement of children's cognitive ability. Moreover, the influence of family cognitive ability on children's cognitive ability is attenuated by family capital, which means that the impact of genes is weakened. The above results are based on ordinary least squares (OLS). After introducing the instrumental variables Bookiv and Mediv to address endogeneity, the results changed in two respects. On the one hand, the influence of family books on children's cognitive ability increased significantly. On the other hand, the impact of medical insurance and endowment insurance on children's cognitive ability became significant: medical insurance was negative, and endowment insurance was positive. In addition, the two-stage least squares (2SLS) results remain robust after controlling the sample size and adding variables.
Moreover, there is heterogeneity by gender and urban location in the influence of these factors on children's Chinese and math cognitive ability. In regard to gender, girls' cognitive ability is more sensitive to culture capital, social capital, and social security, while boys' cognitive ability is more sensitive to living conditions. Specifically, family education, education expectation, books, education participation, social communication, and medical and endowment insurance have a greater impact on girls' cognitive abilities, while tap water is significant for boys' math cognitive ability. In regard to urban location, rural children's cognitive ability is more sensitive to culture capital and social capital, while urban children's cognitive ability is more sensitive to social security and living conditions. Specifically, family education, education expectation, education participation, and social communication have a greater impact on rural children's cognitive ability, while family books, medical insurance, endowment insurance, and tap water are more significant for urban children's cognitive ability.
There are some open problems following this research. Due to the imbalance of the initial sample, the proportions of agricultural-residence and non-agricultural-residence samples remained slightly unbalanced after data processing. The heterogeneity in urban location may therefore lead to a slight bias in our full-sample model, and the error terms of the model may not be independently and identically distributed. In addition, there may be further heterogeneity in the influence of these factors on children's Chinese and math cognitive ability, and a full mediation analysis would be worthwhile in the future. In this study, we take family cognitive ability as a proxy variable for genes, but the empirical results reported here are worth re-examining with data that directly include both genetic and environmental measures.
The findings above provide theoretical support for efforts to further narrow the cognitive differences between children.
---
Data Availability Statement: Data used in this paper can be found from the China Family Panel Studies, http://www.isss.pku.edu.cn/cfps/ (accessed on 13 March 2022).
---
Author Contributions: Conceptualization, X.D.; methodology, X.D.; analysis, X.D. and W.L.; investigation, X.D. and W.L.; data curation, W.L.; writing-original draft preparation, W.L.; writing-review and editing, X.D.; supervision, X.D.; project administration, X.D.; funding acquisition, X.D. All authors have read and agreed to the published version of the manuscript.
---
Conflicts of Interest:
The authors declare no conflict of interest.
---
Appendix A
| 36,311 | 1,659 |
4a1365cfd58b3361daa087d4ebca57af06a1b7e7 | Truth in an Age of Information | 2,022 | [
"JournalArticle"
] | Many of the issues in the modern world are complex and multifaceted: migration, banking, not to mention climate change and Covid. Furthermore, social media, which at first seemed to offer more reliable 'on the ground' citizen journalism, has instead become a seedbed of disinformation. Trust in media has plummeted, just when it has become essential. This is a problem, but also an opportunity for research in HCI that can make a real difference in the world. The majority of work in this area, from various disciplines including data science, AI and HCI, is focused on combatting misinformation -fighting back against bad actors. However, we should also think about doing better -helping good actors to curate, disseminate and comprehend information better. There is exciting work in this area, but much still to do. | INTRODUCTION
Falsehood flies, and truth comes limping after it.
Jonathan Swift, The Examiner No. 14, Thursday, 9th November 1710
Politicians have always been 'economical with the truth' and newspapers have toed an editorial line. However, never in recent times does it seem that confidence in our media has been lower. From the Brexit battle bus in the UK to suspected Russian meddling in US elections, fake news to alternative facts -it seems impossible for the general public to make sense of the contradictory arguments and suspect evidence presented both in social media and traditional channels. Even seasoned journalists and editors seem unable to keep up with the pace and complexity of news. These problems were highlighted during Covid when understanding of complex epidemiological data was essential for effective government policy and individual responses. As well as the difficulty of media (and often government) in understanding and communicating the complexity of the situation, various forms of misinformation caused confusion. There are obvious health impacts of this misinformation due to taking dangerous 'cures' (Nelson, 2020) and vaccination hesitancy (Lee, 2022a), as well as its role in encouraging violence against health workers (Mahase, 2022). In addition, a meta-review of many studies of Covid misinformation identified mental health impacts as also significant (Rocha, 2021).
If democracy is to survive and nations coordinate to address global crises, we desperately need tools and methods to help ordinary people make sense of the extraordinary events around them: to sift fact from surmise, lies from mistakes, and reason from rhetoric. Similarly, journalists need the means to help them keep track of the surfeit of data and information so that the stories they tell us are rooted in solid evidence.
Crucially in increasingly politically fragmented societies, we need to help citizens explore their conflicts and disagreements, not so that they will necessarily agree, but so that they can more clearly understand their differences.
These are not easy problems and do not admit trite solutions. However, there is existing work that offers hope: tracking the provenance of press images (ICP, 2016), ways to expose the arguments in political debate (Carneiro, 2019), even using betting odds to track the influence of news on electoral opinion (Wall, 2017).
I hope that this paper will show that we can make a difference, and will offer challenges for future research.
---
THE B-MOVIE CAST OF MISINFORMATION
Deliberate misinformation is perhaps the most obvious problem we face. There are extensive data science studies by academics and data journalists attempting to understand the extent and modes of its spread (e.g. Albright, 2016; Vosoughi, 2018).
Crucially, false information appears to spread more rapidly than true information, possibly because it is more novel (Vosoughi, 2018). Although there is considerable debate as to the sufficiency of their responses, both Facebook and Twitter are constantly adjusting algorithms and policies to attempt to prevent or discourage fake news (Dreyfuss, 2019; NPR, 2022; Twitter, 2022). Within the HCI community there has been considerable work exploring the human aspects of the spread of misinformation online (Flintham, 2018; Geeng, 2020; Varanasi, 2022), ways to visualise it (Lee, 2022b), tools for end-users to help identify it (Heuer, 2022), and CHI workshops on the topic (Gamage, 2022; Piccolo, 2021).
---
Bad Actors
Much of the focus on misinformation is on 'bad actors': extremist organisations, 'foreign' powers interfering in elections, or simply those aiming to make a fast buck. In the context of misinformation, 'bad' can mean two things: 1. they are intrinsically bad people, bad states, or bad media; 2. they use bad methods and/or spread bad information (including misinformation and hateful or violent content). The first of these can be judged relative to clear criteria such as human rights or terrorism, but may simply mean those we disagree with; and, of course, the boundary between the two may often be unclear.
When the two forms of 'bad' agree the moral imperative is clear, even though implementation may be harder. Forced in part by government and popular pressure, social media platforms have extensive mechanisms both to attempt to suppress bad information and suspend accounts of those who promulgate it (Guardian, 2018).
Probably the most high-profile example of the latter was Twitter's suspension of @realDonaldTrump. This was met both with widespread relief and with caution due to its potential impact on free speech (Noor, 2021), especially given Twitter's arguments for why it was suspended when it was (Twitter, 2021).
Of course, sometimes bad actors may spread true (or even good) information.
In some cases this is simply because few are altogether bad. For example, among those who believe and then promulgate Covid conspiracy theories, many will be well-meaning, albeit deeply misguided, and some of the information they pass on may be accurate.
However, true information can also be cynically used to give credence to otherwise weak or misleading arguments; for example, a recent study of cross-platform misinformation (Micallef, 2022) found a substantial proportion of cases where a YouTube video with true information about Covid was referenced by a tweet or post that in some way misinterpreted the material or used it out of context. In addition, many astroturfing accounts will distribute accurate information as a means to create trust before disseminating misinformation. It can be hard to distinguish these, and it is not uncommon for politicians or other campaign groups to inadvertently retweet or quote true, or at least defensible, information that originated from very unsavoury groups, thus giving them credence.
---
When Good Actors Spread Bad Information
As we saw in the last example, those we regard as 'good' actors can also sometimes spread bad information. Sometimes this is deliberate. An extreme case is during war, when misinformation campaigns in an enemy country are regarded as a normal and indeed relatively benign form of warfare (Shaer, 2017). In peacetime, deliberate misinformation is likely to be less extreme and more often a matter of stretching or embroidering the truth, or of selective reporting.
It may also be accidental. For example, Figure 1 shows a "Q&A" (a form of fact check) on the BBC news web site following a claim made by Boris Johnson in January 2018 regarding UK contributions to the EU budget. The overall thrust of the Q&A is correct: the net amount sent to the EU at that time was substantially less than the £350 million figure that Johnson claimed. However, the actual figures are wrong: the Q&A suggested that around 2/3 of the gross figure was returned, when the actual figure was closer to a half. This is probably because at some point a journalist lost track of which figure the half was referring to, but the overall effect was to create a substantially incorrect figure (BBC, 2018). Note that this Q&A pop-up is no longer in the news item; instead there is a link to a 'Reality Check' page which is correct, but with no explicit retraction.
In between are the subtle biases arising simply from the assumptions of journalists, which play out in the selection of which stories to report and also in the language used. For example, in crime or conflict reporting, passive language may be used ("the assailant was shot", or "shells fell on") compared with active language ("AAA shot BBB" or "XXX fired shells on") depending on which side is doing the shooting or bombing.
Personally, while I may despair or be angry at the misinformation from those with whom I disagree, I am most upset when I see poor arguments from those with whom I agree. This is partly pride, wanting to be able to maintain a moral high ground, and partly pragmatic: if the arguments are poor then they can be refuted.
In an age of adversarial media, any mistakes, misrepresentation or hyperbole can be used to discredit otherwise well-meaning sources and promote alternatives that are either ill-informed or malicious. This was evident in the US during the 2016 presidential campaign when many moderate Republican supporters lost faith in the reputable national press in favour of highly partisan local papers; a trend which has intensified since (Gottfried, 2021; Meek, 2021).
---
SEEKING TRUTH
---
The Full Cast
We have already considered the 'B-movie' bad/good guy roles of the producers and influencers, both of whom can mislead, whether ill-intentioned or ill-informed. In reality, even the 'bad' actors may be those with genuinely held, albeit unfounded, beliefs about 5G masts or a communist take-over of the US government. Of course, those of us who would consider ourselves 'good' actors may still distort or be selective in what we say, albeit for the best of reasons.
In addition, those who receive misinformation and are confused or misled by it may differ in levels of culpability. It is easier to believe the things that make life easier, whether it is the student grasping at suggestions that the impact of Covid may be exaggerated in order to justify a party, or the professional accepting climate change scepticism to justify buying that new fuel-hungry car.
Of course, the purveyors of news and information are under pressure, and may not be wholly free in what they say, or may run risks if they do. Even in the last year we have seen many journalists, bloggers and authors arrested, sanctioned, stabbed and shot.
Perhaps more subtle is the interplay within the ecology of information: journalists and social media modify what and how they present information in order to match the perceived opinions and abilities of their readership.
---
Two Paths
The greatest effort currently appears to be focused on fighting back against bad actors. This includes algorithms to detect and counter misinformation, such as Facebook's intentions to weed out anti-vaccination content. These are predominantly aimed at the bad actors.
However, in addition we need to think about doing better: ways for the good actors to disseminate and understand information, so that they are in a better position to evaluate sources and do not inadvertently create bad information themselves.
We'll look briefly at four areas where appropriate design could help us to do better:
• echo chambers and filter bubbles
• better argumentation
• data and provenance
• numeric data and qualitative-quantitative reasoning
These are not the only approaches, but I hope they will stimulate the reader to think of more.
---
Echo Chambers and Breaking Filter Bubbles
Social media was initially seen as a way to democratise news and information sharing and to allow those in the 'long-tail' of small interest groups to find like-minded people in the global internet. However, we now all realise that an outcome of this has been the creation of echo chambers, where we increasingly only hear views that agree with our own.
In some ways this has always been the case, both in choices of friendship groups for informal communication and in the audiences of different newspapers. However, social media and the personalisation of digital media have both intensified the effect and made it less obvious -you know that a newspaper has a particular editorial line, but do not necessarily recognise that web search results have been tuned to your existing prejudice. This is now a well-studied area, with extensive work analysing social media to detect filter bubbles and understand the patterns of communication and networks that give rise to them (Terren, 2021; Garimella, 2018; Cinelli, 2021). Notably, one of these studies (Garimella, 2018) highlighted the role of 'gatekeepers', people who consume a broad range of content but then select from it to create partisan streams. Perhaps more sadly, the same study notes that those who try to break down partisan barriers pay a "price of bipartisanship", in that balanced approaches or multiple viewpoints are not generally appreciated by their audiences.
In addition, there has been work on designing systems that in different ways attempt to help people see beyond their own filter bubbles (e.g. Foth, 2016; Jeon, 2021), but on the whole this has been less successful, especially in actual deployment. Indeed, attempts to present opposite arguments can end up deepening divides if the opposing views are too different or are presented too soon.
---
Argumentation
It is easy to see the flaws in arguments with which we disagree: we know they are wrong and can thus hunt for the faults -the places where our intuitions and the argument disagree are precisely the places where we expect holes in the reasoning. Of course, we all create bad arguments. It is very hard to notice the gaps in one's own reasoning, and equally hard to notice the fallacious arguments of others when one agrees with their final conclusions.
Of course, those who disagree with us will notice the gaps in our arguments, thus increasing their own confidence and leading them to discount our opinions! It is crucial therefore to have tools that both help the public to interrogate the arguments of politicians and influencers, and also to help those who are aiming to create solid evidence-based work (including academics) to ensure valid arguments.
There is of course long-standing work on argumentation systems, such as IBIS (Noble, 1988), and work in the NLP community to automatically analyse arguments. Much of this is targeted towards more professional audiences, but there are also steps to help the general public engage with media, such as the Deb8 system (Carneiro, 2019) developed at St Andrews, an accessible argumentation system that allows viewers of a speech or debate to collaboratively link assertions in the video to evidence from the web. This is an area which seems to have many opportunities for research and practical systems aimed at different audiences, including the general public, journalists, politicians, academics, and fact checkers. This could include broad advice, for example ensuring that fact checkers clearly state their interpretation of a statement before checking it, to avoid inadvertently debunking a strawman misinterpretation. Similarly, we could imagine templates for arguments: for example, given an implication of the form "if A then B", it is important to keep track of the assumptions. In particular, while more formal logics and some forms of argumentation schemes focus on low-level argumentation, it seems that the tools we need should perhaps focus on the higher-level argumentation, the information and assumptions that underlie a statement, more than the precise logic of the inference.
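To make this concrete, the sketch below (purely illustrative, not the design of IBIS, Deb8 or any existing system) shows one possible data structure for such higher-level tracking: a claim, the assumptions it rests on, and the evidence attached to each assumption, with a helper that surfaces assumptions nobody has yet checked.

```python
# Illustrative only: a minimal structure for tracking a claim, its assumptions,
# and the evidence gathered for each assumption. All names and values are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Assumption:
    text: str
    evidence: List[str] = field(default_factory=list)  # URLs or citations checked so far

@dataclass
class Claim:
    statement: str                                      # the "B" in "if A then B"
    assumptions: List[Assumption] = field(default_factory=list)  # the "A"s that must hold

    def unchecked(self) -> List[str]:
        """Assumptions with no evidence attached - the gaps a reader should probe."""
        return [a.text for a in self.assumptions if not a.evidence]

claim = Claim(
    statement="Policy X will reduce costs",
    assumptions=[
        Assumption("the quoted cost figures are current", ["https://example.org/source"]),
        Assumption("behaviour will not change in response to the policy"),
    ],
)
print(claim.unchecked())  # -> ['behaviour will not change in response to the policy']
```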
In addition, in the AI community there are now a variety of tools to help automatically detect possible bias in data or machine learning algorithms. Maybe some of these could be borrowed to help human reasoning, for example shuffling aspects of situations (e.g. gender, political party or ethnicity), to help us assess to what extent our view is shaped by these factors.
---
Data and Provenance
One of the forms of misinformation is the deliberate or accidental use of true information or accurate data divorced from its context. For the spoken word or text, this might be a quotation; for photographs or video, the choice of a still, a segment, or even parts edited together that give a misleading impression. Indeed, the potential for digital media to be compromised in different ways has led some to look to technology such as blockchains to prevent tampering, or to the use of analogue or physical representations (Haliburton, 2021).
One example of work addressing this issue was the FourCorners project (ICP, 2016), a collaboration between OpenLab Newcastle, the International Centre for Photography and the World Press Photo Foundation, which embeds provenance into photographs allowing interrogation such as "what are the frames before and after this photograph?", "are there other photos at the same time and place?". One can imagine similar things for textual quotes, in the manner of Ted Nelson's vision of transclusion (Nelson, 1981), where segments quoted from one document retain their connection back to the original. This is an area I've worked on personally in the past with the Snip!t system, originally developed in 2003 following a study of user bookmarking practice (Dix, 2003). Snip!t allowed users to 'bookmark' portions of a web page and automatically kept track not just of the quoted text, but where it came from (Dix, 2010). Later work in this area by others has included both commercial systems such as Evernote, and academic research, such as Information Scraps (Bernstein, 2008). Currently there is an explosion of personal knowledge management (PKM) apps, some of which, such as Readwise (readwise.io) and Instapaper (instapaper.com), help with the process of annotating documents. However, these systems are mostly focused on retaining the context of captured notes and quotes; we desperately need better ways to retain this once the quote is embedded in another document or web page.
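As a purely illustrative sketch (not the Snip!t, FourCorners or Readwise design), the record below shows the kind of minimal provenance that such a 'quote keeper' could retain for each captured snippet, so that a quotation can still be traced back to its source once it is pasted elsewhere; all field names and example values are assumed.

```python
# Hypothetical "snippet with provenance" record; field names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Snippet:
    text: str                  # the quoted passage itself
    source_url: str            # where it was captured from
    captured_at: datetime      # when it was captured
    context: str = ""          # surrounding text retained for later verification

    def cite(self) -> str:
        """Render the quote together with its provenance."""
        return f'"{self.text}" ({self.source_url}, captured {self.captured_at.date()})'

snippet = Snippet(
    text="An example sentence captured from a web page.",
    source_url="https://example.org/article",        # placeholder URL
    captured_at=datetime.now(timezone.utc),
    context="...the sentence before. An example sentence captured from a web page. The sentence after...",
)
print(snippet.cite())
```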
This connection to sources is also important for data. In the example from the BBC in Figure 1, the journalist had clearly lost track of the original data on UK/EU funding and so misremembered aspects.
Can we imagine tools for journalists that would help them keep track of the sources of their data and images? Indeed, it would be transformative if everyday office tools such as word processors and presentation software made it easy to keep references to imported images. In work with humanities and heritage, we have noted how file systems have barely altered since the 1970s (Dix, 2022) -the folder structures allow us to store and roughly classify, but there is virtually no support for talking about documents and about their relationships to one another. Semantic desktop research (Sauermann, 2005), which seemed promising at the time, has never found its way into actual operating systems.
Happily, there are projects, such as Data Stories (2022), that are helping communities to use data to tell their own stories, so that the online world can allow open discourse and interpretation whilst connecting to the underlying data on which it is based. Furthermore, one of the popular PKM apps, Obsidian (obsidian.md), supports semi-structured metadata for every note.
---
Numeric Data and Qualitative-Quantitative Reasoning
Going back to the example in Figure 1, part of the problem here may well simply be that journalists are often more adept with words than with numbers. We are in a world where data and numerical arguments are critical. This was true of Covid, where understanding of exponential growth and probabilistic behaviour was crucial, but it is equally so for issues such as climate change.
One of the arguments put forward by climate change sceptics is that it is hard to believe in long-term climate models given that forecasters sometimes struggle to predict whether it is going to rain next week. This, at first sight, is not an unreasonable argument, although anyone who has dealt with stochastic phenomena knows that it is often easier to predict long-term trends than short-term behaviour. Indeed, it is also relatively easy to communicate this -we can all say with a degree of reliability that a British winter will be wetter and colder than the summer, even though we'll struggle to know the weather from day to day.
This form of argument is not about exact numerical calculation, nor about abstract mathematics, but something else -informal reasoning about numerical phenomena. Elsewhere I've called this qualitative-quantitative reasoning (Dix, 2021a, 2021b), and it seems to be a critical, but largely missing, aspect of universal education. Again this is an area that is open for radical contributions; for example, iVolver (Nacenta, 2017) allows users to extract numerical and other data from visualisations, such as pie charts, in published media. My own work has included producing table recognisers in the commercial intelligent internet system OnCue in the dot-com years (Dix, 2000) and, more recently, investigating ways to leverage some of the accessibility of spreadsheet-like interfaces and simple ways to allow users to combine their own data (Dix, 2016).
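To make the seasonal-average point above concrete, here is a toy simulation (not from the article, with made-up numbers) showing why long-run averages can be predictable even when individual days are not.

```python
# Toy illustration: daily values are dominated by noise, yet seasonal averages separate cleanly.
import random

random.seed(42)

def daily_temps(seasonal_mean, days=90, noise_sd=6.0):
    """Daily temperature = seasonal mean + large day-to-day random variation."""
    return [random.gauss(seasonal_mean, noise_sd) for _ in range(days)]

summer = daily_temps(18.0)   # assumed seasonal means, for illustration only
winter = daily_temps(5.0)

# Individual days overlap a lot, so short-term prediction is unreliable...
print("one summer day:", round(summer[0], 1), "| one winter day:", round(winter[0], 1))
# ...but the season-long averages are clearly distinguishable (long-term trends are predictable).
print("summer mean:", round(sum(summer) / len(summer), 1),
      "| winter mean:", round(sum(winter) / len(winter), 1))
```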
---
CALL TO ACTION
We are at a crucial time in a world where information is everywhere and yet we can struggle to see the truth amongst the poorly sourced, weakly argued, deliberately manipulated or simply irrelevant. However, there are clear signs of hope in work that is being done and also opportunities for research that can make a real difference.
Of course, as academics we are also in the midst of a flood of scholarly publication, some more scholarly than others! There are calls for us to 'clean up our own act' too, including the rigour of academic argumentation (Basbøll, 2018) and the transparency of data and materials (Wacharamanotham, 2020). As well as being a problem we need to deal with within academia, it is also an opportunity to use our own academic community as a testbed for tools and techniques that could be used more widely. | 21,083 | 817 |
d86a5045aef6b197e8c01e2bbeba7b3df555432d | Unveiling Gender Dynamics: An In-depth Analysis of Gender Realities | 2,023 | [
"JournalArticle"
] | This journal article presents a comprehensive exploration of gender dynamics through an in-depth analysis of gender realities. By delving into the intricate interplay of cultural norms, evolving identities, educational empowerment, and gender-based discrimination, this study sheds light on the complexities shaping gender experiences. The research employs qualitative methods, including semi-structured interviews and thematic analysis, to capture diverse perspectives across India. The findings reveal a nuanced spectrum of gender dynamics, emphasizing the intersectionality of identity, the evolving definitions of masculinity and femininity, and the impact of educational opportunities. The study underscores the challenges posed by gender-based discrimination and violence, while also highlighting the potential for progress through policy interventions and societal shifts. Overall, this research contributes to a deeper understanding of the intricate fabric of gender dynamics, urging concerted efforts towards fostering gender equality and social justice. | INTRODUCTION
Gender, as a fundamental social construct, influences every facet of human life, shaping identities, roles, and interactions within societies (Ram et al., 2023;Pulerwitz et al., 2010). The intricate interplay between societal norms, power structures, and cultural influences has engendered a diverse landscape of gender realities that are often veiled beneath the surface (Nandigama, 2020;Srivastava, 2020). In light of the growing recognition of gender's pivotal role in shaping individual experiences and social structures, this article endeavors to embark on an in-depth analysis of gender dynamics. By delving into the complexities of how gender operates within various contexts, this study aims to uncover the multifaceted dimensions of gender realities and shed light on the underlying mechanisms that influence behaviors, perceptions, and opportunities (Nalini, 2006) (Verma et al., 2006).
As societies progress and evolve, gender-related discussions have gained momentum, bringing to the forefront issues such as gender equality, representation, and violence. Yet, a comprehensive understanding of the nuanced interplay between gender constructs, power dynamics, institutional norms, and cultural influences remains a critical endeavor (Iyer et al., 2007; Apriyanto, 2018). This article stands as a response to the pressing need for an expansive exploration of gender dynamics: one that transcends conventional narratives and delves into the less visible realms of gender interactions. By employing a multidimensional approach that acknowledges the intersectionality of gender with other dimensions of identity, including race, class, and sexuality, this analysis aims to illuminate the complexity of gender realities that often defy simplistic categorizations (Narwana & Rathee, 2017; Prusty & Kumar, 2014).
This study recognizes that the exploration of gender dynamics extends beyond academic inquiry; it has implications for policy, advocacy, and social change. To craft effective strategies that address gender-based inequalities and discrimination, it is imperative to unravel the intricate tapestry of gender's influence on personal lives and societal structures. By unveiling the hidden intricacies of gender dynamics, this research seeks to contribute to a deeper comprehension of the challenges faced by individuals across the gender spectrum, ultimately fostering more informed conversations, evidence-based policies, and a more equitable world (Ahoo & Sagarika, 2020; Scott et al., 2017).
The concept of gender, far from being confined to binary categorizations, exists along a spectrum, encompassing diverse identities, expressions, and experiences. The conventional understanding of gender as a simple dichotomy has been challenged by evolving societal awareness, acknowledging the need for a more inclusive and nuanced perspective. This study recognizes that gender dynamics are not static, but fluid and contextual, influenced by historical legacies, cultural contexts, and socio-economic structures (Gordin & True, 2019; Gupta et al., 2017). While progress has been made in addressing gender disparities, persistent inequalities persist. These inequities are deeply rooted in deeply ingrained gender norms and power imbalances that permeate institutions, policies, and everyday interactions. This investigation seeks to unravel these intricate threads that form the fabric of gender dynamics, unveiling the often hidden and subtle mechanisms that perpetuate these disparities. By examining the complexities of gender dynamics, we aim to contribute to a more profound understanding of the lived experiences of individuals and communities, facilitating a more empathetic and informed approach to fostering gender equality (Park & Maffi, 2019;Goodrich, 2020).
As we delve into this exploration, it becomes evident that gender is not isolated; it intersects with other aspects of identity and inequality. Marginalized groups often face compounded discrimination due to the intersections of race, class, and gender, making the study of gender dynamics an essential step towards dismantling systemic oppression. Through this investigation, we strive to bridge the gap between academic discourse and real-world impact, offering insights that can inform policy decisions, advocacy initiatives, and transformative social change. In the subsequent sections of this article, we present a comprehensive framework for analyzing gender dynamics, drawing on interdisciplinary perspectives and methodologies (Van, 2010; Mutenje et al., 2016). Our analysis is grounded in the understanding that unraveling gender realities requires a multifaceted approach, acknowledging both the visible manifestations and the underlying structures that perpetuate gender norms and disparities. By exploring these dimensions, we aim to contribute to a more holistic understanding of the complexities of gender dynamics and advance the discourse surrounding gender equality and social justice.
Gender dynamics, as a fundamental aspect of social structures, play a profound role in shaping individuals' lives and societal norms. In the diverse and intricate landscape of India, where traditions, cultures, and socioeconomic contexts intermingle, understanding the multifaceted nature of gender realities becomes particularly crucial. This article embarks on an in-depth analysis of gender dynamics within the Indian context, aiming to illuminate the complexities, challenges, and opportunities that define gender relations and identities in this diverse nation. India, known for its rich cultural tapestry, is also a country grappling with deeply rooted gender inequalities. The complexities of gender dynamics extend beyond mere biological distinctions, encompassing cultural norms, historical legacies, and contemporary shifts. This study seeks to unveil the nuanced interplay between these factors, delving into the myriad ways in which gender influences roles, expectations, and power dynamics within Indian society. In a country characterized by its diverse ethnicities, languages, and traditions, gender dynamics are influenced by regional variations and historical contexts. While there have been advancements towards gender equality, persistent challenges such as gender-based violence, unequal access to education, and limited political representation continue to shape the gender landscape in India. As such, this study aims not only to reveal the existing gender realities but also to provide a comprehensive understanding of the factors that perpetuate or challenge gender disparities (Tyaqi & Das, 2018).
Recognizing the intersectionality of gender with other dimensions of identity, such as caste, class, and religion, is crucial. These intersections further complicate the dynamics of gender relations, often leading to compounded discrimination and marginalized experiences. By adopting an intersectional lens, this analysis seeks to untangle the intricate threads that weave together the tapestry of gender experiences in India. By uncovering the layers of gender dynamics, this research contributes to a more informed dialogue and evidence-based policymaking. As India continues to strive for progress and equality, an exploration of gender realities is essential to address deeply ingrained inequalities and foster a more inclusive and equitable society. Through this examination, we aim to deepen our comprehension of gender dynamics within the Indian context, providing insights that inform both academic discourse and practical interventions.
---
B. METHOD
This study adopts a qualitative research design to conduct an in-depth analysis of gender dynamics within India. Qualitative research is chosen for its capacity to explore the intricate aspects of gender realities. Semi-structured in-depth interviews and focus group discussions are conducted with a diverse sample to capture a range of perspectives. Thematic analysis is employed to identify recurring patterns and themes in the data, facilitated by NVivo software. Ethical guidelines are followed, obtaining informed consent and ensuring confidentiality. Researchers' reflexivity is acknowledged, and limitations include contextual and interpretation biases. This qualitative approach aims to unravel the complexities of gender dynamics, providing comprehensive insights into cultural norms, power relations, and individual experiences shaping gender realities in India.
---
C. RESULT AND DISCUSSION
The process of thematic analysis delved into the rich tapestry of narratives from diverse participants, uncovering a spectrum of insights that collectively illuminate the multifaceted and often paradoxical gender dynamics deeply embedded in India's societal fabric. As the data was meticulously examined, five overarching themes emerged, each offering a window into the complex interplay of cultural norms, power dynamics, and individual experiences that intricately shape gender realities within the nation.
1. Cultural Perceptions and Gender Norms: The participants' voices resonated with a consistent theme underscoring the profound influence of cultural norms on gender identities and roles. These deeply ingrained norms perpetuate distinct expectations for men and women, often confining them within predetermined roles that restrict opportunities and reinforce unequal power dynamics. Narratives unveiled the tug-of-war between tradition and progress, as individuals grapple with the juxtaposition of longstanding norms against modern aspirations.
2. Intersectionality of Identity: The vivid mosaic of gender dynamics is further nuanced by the intersection of gender with other dimensions of identity. Participants emphasized how factors like caste, class, and religion intersect with gender, shaping unique experiences and magnifying inequalities. The narratives poignantly revealed that these intersections, while often ignored, have profound implications, leading to layered discrimination and impacting access to resources and opportunities.
3. Evolving Masculinities and Femininities: The research unfurled the evolving perceptions of masculinity and femininity, signaling a shifting socio-cultural landscape. Traditional definitions of gender roles are gradually making way for more fluid expressions of identity. However, this evolution is not without resistance, as traditional notions of gender are deeply entrenched. The narratives provided insight into the tension that arises when these progressive shifts challenge deeply rooted conventions.
4. Educational Empowerment: Amid the complexity, education emerged as a beacon of change and empowerment. Narratives highlighted how education offers a platform to challenge gender norms, empowering women and marginalized groups to pursue opportunities beyond traditional boundaries. Yet, a stark dichotomy emerged: while education is seen as a powerful tool for change, disparities in educational access persist, particularly in rural areas, where the transformative potential of education remains unrealized.
5. Gender-based Violence and Discrimination: The themes of gender-based violence and discrimination reverberated throughout the narratives, emphasizing the pervasive nature of abuse. Participants shared heart-wrenching stories of harassment, unequal treatment, and systemic barriers that reinforce gender inequalities. The narratives laid bare the urgency of addressing the systemic and cultural factors perpetuating gender-based violence and discrimination.
Beyond these focal themes, cross-cutting insights intertwined with the broader analysis. The impact of media in shaping and challenging gender stereotypes emerged as a double-edged sword. While media can promote progressive ideals, it can also perpetuate harmful norms. Additionally, the discussions on policies and legal frameworks highlighted a complex landscape of opinions on their effectiveness, underlining the need for comprehensive strategies that encompass both systemic reform and cultural change.
The rich tapestry of themes and cross-cutting insights collectively underscores the intricacies of gender dynamics within the Indian context. The intersectionality of identity adds layers of complexity, magnifying the challenges faced by marginalized communities. The evolving definitions of masculinity and femininity reflect a society in transition, where progress coexists with resistance. Cultural norms were revealed as both influential and constraining, emphasizing the need for cultural change alongside policy reform. Education's transformative potential and the pressing concerns of gender-based violence and discrimination together represent a call to action. This comprehensive exploration challenges policymakers, advocates, and society at large to address deeply ingrained inequalities and work towards a more inclusive and equitable future.
---
Social Interaction and Community Engagement
In the rapidly urbanizing landscape of India, where concrete jungles often dominate the horizon, the significance of green spaces transcends mere aesthetics. Beyond providing environmental benefits, green spaces serve as crucial platforms for fostering social cohesion, nurturing a sense of community, and cultivating a shared sense of belonging. This segment of the study delves into the intricate ways in which green spaces facilitate social interactions, encourage community bonding, and engender a deep-rooted sense of belonging among individuals across diverse walks of life. Green spaces, ranging from parks and gardens to community squares, stand as communal havens that draw people from various backgrounds. They act as natural magnets, offering a neutral ground for individuals to converge, communicate, and engage in a myriad of activities. This phenomenon is particularly pronounced in densely populated urban areas, where green spaces become a respite from the hustle and bustle of city life, allowing residents to connect on a human level.
The verdant expanse of green spaces provides a canvas for fostering connections beyond individual identities. Picnics, group exercises, cultural events, and impromptu gatherings become catalysts for forging bonds among neighbors who might not otherwise cross paths. These spaces dissolve social barriers, facilitating interactions between generations, economic classes, and cultural backgrounds. As community members engage in shared activities and collaborate on various initiatives, a sense of collective identity emerges, knitting together a fabric of unity that transcends differences. Perhaps most notably, green spaces nurture a profound sense of belonging among those who frequent them. The communal ownership of these areas fosters a feeling of stewardship and responsibility, strengthening ties between individuals and the land they share. Green spaces often become canvases for community expression, where murals, sculptures, and gardens serve as testaments to collective identity. This sense of belonging extends beyond the immediate vicinity of the green space, fostering a ripple effect that contributes to broader social cohesion within neighborhoods and even entire cities.
The role of green spaces in promoting social interactions and community bonding is especially significant in the context of India's diverse cultural landscape. These spaces serve as platforms where cultural celebrations, performances, and festivals unfold, allowing people to share their heritage with one another. This cultural exchange enhances understanding and appreciation among diverse groups, thereby fostering an environment of inclusivity and mutual respect.
---
Recreational Opportunities: Assessing the Impact of Availability of Recreational Activities on Social Engagement in India
In the dynamic cultural milieu of India, where social interactions are deeply woven into the fabric of daily life, the presence and accessibility of recreational opportunities play a pivotal role in shaping the vibrancy of communities. This segment of the study scrutinizes the spectrum of available recreational activities and their influence on fostering social engagement, connecting individuals across diverse backgrounds, and contributing to the collective wellbeing. India's recreational landscape is a tapestry woven from diverse threads, encompassing both traditional and contemporary activities. From traditional dance forms and religious celebrations to modern sports and entertainment, the array of options reflects the multifaceted nature of the nation's interests. Festivals, community events, sports tournaments, cultural workshops, and outdoor adventures serve as canvases for shared experiences, where people congregate to celebrate, compete, and connect.
Recreational activities act as a social glue, binding individuals together in shared pursuits. Festivals, for instance, transcend religious and regional boundaries, creating platforms for people to unite in celebration. Sports leagues and tournaments not only promote physical fitness but also provide avenues for camaraderie, teamwork, and friendly competition. These activities offer individuals common ground, a space where relationships form, and social networks expand. Recreational opportunities transcend individual pursuits, extending their reach into the realm of collective engagement. Participation in these activities often requires interaction and collaboration, leading to the cultivation of a sense of belonging within a larger community. Whether through volunteering at cultural events, joining book clubs, or participating in local sports teams, individuals engage in a collective endeavor that nurtures social bonds and mutual support.
The availability of a diverse range of recreational activities fosters inclusivity by accommodating a wide spectrum of interests and talents. Individuals of varying ages, backgrounds, and abilities find avenues to express themselves and engage with their community. In this way, recreational activities contribute to breaking down social barriers and creating spaces where diversity is celebrated. The advent of the digital age has also introduced new dimensions to recreational activities. Virtual spaces, social media platforms, and online gaming communities offer avenues for connection that transcend physical boundaries. While fostering virtual connections, these platforms also raise questions about the nature of social engagement in the digital realm and its implications for in-person interactions.
---
Equality and Access
Within the intricate tapestry of India's socio-environmental landscape, the equitable distribution of green spaces emerges as a potent lens through which to examine issues of social justice and environmental equity. This segment of the study delves into the multifaceted dynamics surrounding the availability of green spaces, particularly their accessibility and benefits for marginalized communities. The investigation seeks to unveil how these spaces can serve as tools for bridging disparities and fostering environmental equity across diverse contexts within India. Green spaces, often emblematic of natural respite and recreation, take on an added dimension as symbols of social justice. As these spaces offer moments of tranquility and interaction, their accessibility becomes a matter of equitable distribution of resources. Examining the distribution of green spaces and their accessibility within different communities provides insights into the allocation of amenities that contribute to social wellbeing.
Green spaces, while providing havens for leisure, exercise, and community engagement, can sometimes perpetuate inequalities if their distribution disproportionately favors privileged communities. Investigating how marginalized communities access and benefit from green spaces is pivotal to understanding broader issues of social justice. The availability of these spaces to all members of society, regardless of economic status, becomes an essential gauge of a society's commitment to inclusivity. The availability of green spaces is intricately linked to environmental inequalities, often reflecting patterns of urban planning and development. Communities with limited access to green spaces may also face exposure to environmental hazards, further exacerbating disparities. By exploring the spatial relationships between green spaces, marginalized neighborhoods, and environmental risks, this inquiry sheds light on the intersections between social justice and environmental concerns.
Green spaces are not merely physical entities but spaces imbued with cultural and social significance. Investigating their equitable distribution extends beyond access to encompass the preservation of cultural heritage. These spaces can serve as anchors for cultural expression and community identity, promoting social cohesion and resilience within marginalized communities. The equitable distribution of green spaces intertwines with issues of environmental justice and public health. Disparities in green space accessibility can impact air quality, mental well-being, and physical health outcomes, disproportionately affecting marginalized communities. Exploring these correlations deepens our understanding of how green spaces contribute to a broader framework of social and environmental justice.
In the intricate web of India's urban and rural landscapes, the accessibility of green spaces becomes a lens through which to examine the extent of equitable distribution and social inclusivity. This section of the study delves into the multifaceted factors that impact access to green spaces, shedding light on how proximity, transportation, and physical barriers collectively shape individuals' opportunities to connect with nature and communal spaces. The geographic proximity of green spaces to residential areas profoundly affects their accessibility. As urban centers expand, ensuring that green spaces are conveniently located becomes paramount. Analyzing the spatial distribution of green spaces relative to population densities provides insights into the effectiveness of urban planning in promoting equitable access. Proximity is a crucial determinant, influencing whether individuals, particularly those from marginalized communities, can integrate these spaces into their daily lives.
The role of transportation infrastructure in mediating access to green spaces cannot be underestimated. Availability of efficient public transport and pedestrian-friendly routes can bridge the gap between neighborhoods and distant parks. Examining transportation options, including walking, cycling, and public transit, offers a nuanced understanding of how communities navigate physical distances to engage with nature. Conversely, inadequate transportation options can create barriers, limiting green space access predominantly to those with private vehicles. Physical barriers, such as highways, water bodies, and infrastructure limitations, can fragment communities and impede access to green spaces. Analyzing the presence of such barriers and their impact on different demographics underscores the intersectionality of accessibility challenges. Marginalized communities often disproportionately bear the brunt of these barriers, reinforcing patterns of exclusion. Evaluating how urban development addresses or perpetuates these barriers reveals a complex interplay between urbanization and social equity.
Social and cultural dimensions can either enhance or inhibit green space accessibility. Community perceptions, safety concerns, and cultural norms can influence individuals' decisions to frequent these spaces. A deeper examination of these dynamics reveals the interplay between societal values and accessibility, offering insights into potential strategies to bridge gaps and increase inclusivity. Climate and weather patterns introduce another layer of complexity. Extreme heat or monsoons can influence individuals' willingness to travel to green spaces. Assessing how these seasonal variations impact different communities reveals the need for adaptable strategies that ensure year-round access.
---
Policy and Planning Implications
By providing a holistic comprehension of the socioeconomic implications associated with urban green spaces, this framework emerges as a valuable tool for urban planners and policymakers. It offers nuanced insights into optimizing the design, allocation, and management of green spaces within urban landscapes. Central to its findings is the call for integrated policies that effectively harness the potential of green spaces to enhance both the well-being of urban inhabitants and the overarching sustainability of cities. This framework underscores the pivotal role of green spaces as more than just aesthetic additions, positioning them as essential components of thriving, resilient, and socially inclusive urban environments.
The multifaceted framework, rooted in a comprehensive analysis of the socioeconomic dynamics linked to urban green spaces, holds significant implications for urban development and governance. As urban centers continue to expand, the insights drawn from this framework offer practical guidance for decision-makers. Optimizing Urban Green Space Design: The framework illuminates the intricate interplay between green spaces, community well-being, and economic vitality. It provides urban planners with a roadmap for designing green spaces that cater to diverse needs, from recreational opportunities and cultural expression to health and social interaction. By understanding the nuanced ways in which different communities engage with these spaces, planners can create environments that foster inclusivity and address local demands.
Strategic Allocation and Management: With land at a premium in urban settings, the framework's insights into the socioeconomic impacts of green spaces guide informed decisions regarding land allocation. It aids policymakers in striking a balance between commercial development and green infrastructure. Moreover, the framework advocates for strategic management that aligns with the evolving needs of communities. This approach not only enhances green space utility but also maximizes their potential to stimulate local economies.
Urban Well-being and Quality of Life: The recognition of green spaces as contributors to urban well-being is pivotal. The framework underscores how access to nature and recreational opportunities can mitigate stress, boost mental health, and enhance overall quality of life. Urban policymakers can leverage these findings to prioritize the creation and preservation of green spaces, safeguarding the health and vitality of city dwellers.
Sustainability and Climate Resilience: Embracing the insights from this framework also aligns with sustainable urban development goals. Green spaces play a crucial role in mitigating the urban heat island effect, improving air quality, and contributing to overall climate resilience. By incorporating these considerations into urban planning, policymakers can foster environments that are both socially and environmentally sustainable.
In essence, this comprehensive framework serves as a compass for urban planners and policymakers, guiding them toward the creation of cities that are not only economically vibrant but also socially inclusive, environmentally resilient, and conducive to the well-being of all residents. Its holistic perspective underscores the integral nature of green spaces in shaping the cities of tomorrow, inviting collaboration across disciplines to realize a harmonious urban future.
The framework's implications and potential applications for urban planners and policymakers can be explored further:
Equitable Urban Development: The framework's emphasis on socioeconomic impacts underscores the importance of equitable urban development. It highlights the potential of green spaces to bridge social disparities by providing accessible spaces for people from all walks of life. Urban planners can utilize this understanding to ensure that green spaces are strategically located in underserved communities, addressing historical inequalities and promoting social cohesion.
Community Engagement and Empowerment: One of the framework's underlying principles is community engagement. By involving local residents in the design and management of green spaces, urban planners can empower communities to shape their environments. This not only enhances the sense of ownership but also fosters a stronger bond between residents and their neighborhoods, leading to more sustainable and resilient communities. Economic Opportunities: The framework sheds light on the economic benefits that green spaces can generate. From creating jobs in park maintenance and recreational services to boosting nearby property values, green spaces have a tangible impact on local economies. Urban planners can leverage this data to advocate for investments in green infrastructure, highlighting the potential return on investment and long-term economic growth.
Health and Well-being Initiatives: Given the growing concern about urban health challenges, the framework's insights into the positive impact of green spaces on physical and mental health are invaluable. Policymakers can use this information to support health and wellbeing initiatives. By integrating green spaces into health programs and campaigns, cities can proactively address health issues and reduce the burden on healthcare systems. Climate Change Mitigation and Adaptation: Green spaces are essential components of climate change mitigation and adaptation strategies. The framework's acknowledgment of their role in reducing urban heat, improving air quality, and enhancing resilience is crucial. Urban planners can incorporate these findings into broader climate action plans, contributing to the overall sustainability and climate readiness of the city.
Tourism and Cultural Preservation: Green spaces often possess cultural and historical significance. The framework's exploration of how green spaces contribute to cultural expression and identity opens avenues for cultural preservation and tourism. Urban planners can collaborate with local communities to design green spaces that honor heritage while providing spaces for cultural events and celebrations. Cross-sector Collaboration: The framework's multidimensional insights necessitate collaboration across sectors. Urban planners, policymakers, environmentalists, public health experts, and community advocates can unite to harness the full potential of green spaces. This collaboration extends beyond government bodies to include NGOs, academic institutions, and private sector entities, fostering innovative solutions and holistic approaches.
Long-Term Urban Vision: By integrating the framework's findings into urban development plans, cities can establish a long-term vision that prioritizes the well-being of residents. This vision goes beyond immediate gains, focusing on creating resilient, vibrant, and socially inclusive urban environments that stand the test of time. In summary, the framework's comprehensive exploration of the socioeconomic impacts of urban green spaces extends its significance beyond theoretical insights. It equips urban planners and policymakers with practical tools to craft more resilient, equitable, and sustainable cities. As cities evolve and face increasingly complex challenges, this framework offers a roadmap to navigate the intricate tapestry of urban development while prioritizing the needs and aspirations of the people who call these cities home.
The discussion of policy implications and societal shifts is pivotal. The study's insights underscore the need for policies that challenge traditional norms, promote inclusivity, and empower marginalized communities. Furthermore, the narratives suggest that societal shifts are underway, albeit with challenges. This highlights the importance of continued education, awareness campaigns, and grassroots efforts to facilitate change.
---
D. CONCLUSION
In conclusion, this in-depth analysis of gender realities underscores the intricacies of a multifaceted landscape shaped by cultural norms, evolving identities, educational empowerment, and discrimination. The findings offer a nuanced understanding of the complex web of interactions that define gender dynamics in India. The study's insights call for collaborative efforts from policymakers, civil society, and communities to challenge discriminatory norms, promote inclusivity, and work towards a more equitable and just society for all genders. | 32,634 | 1,065 |
d56f97474e690b0b65a72bc1bd8f4fa2e6dcbacf | Influence of socioeconomic factors and region of residence on cancer stage of malignant melanoma: a Danish nationwide population-based study | 2,018 | [
"JournalArticle"
] | Background: Socioeconomic differences in survival after melanoma may be due to late diagnosis of the disadvantaged patients. The aim of the study was to examine the association between educational level, disposable income, cohabitating status and region of residence with stage at diagnosis of melanoma, including adjustment for comorbidity and tumor type. Methods: From The Danish Melanoma Database, we identified 10,158 patients diagnosed with their first invasive melanoma during 2008-2014 and obtained information on stage, localization, histology, thickness and ulceration. Sociodemographic information was retrieved from registers of Statistics Denmark and data on comorbidity from the Danish National Patient Registry. We used logistic regression to analyze the associations between sociodemographic factors and cancer stage. Results: Shorter education, lower income, living without partner, older age and being male were associated with increased odds ratios for advanced stage of melanoma at time of diagnosis even after adjustment for comorbidity and tumor type. Residence in the Zealand, Central and Northern region was also associated with advanced cancer stage. Conclusion: Socioeconomically disadvantaged patients and patients with residence in three of five health care regions were more often diagnosed with advanced melanoma. Initiatives to increase early detection should be directed at disadvantaged groups, and efforts to improve early diagnosis of nodular melanomas during increased awareness of the Elevated, Firm and Growing nodule rule and "when in doubt, cut it out" should be implemented. Further studies should investigate regional differences in delay, effects of number of specialized doctors per inhabitant as well as differences in referral patterns from primary to secondary health care across health care regions. | Introduction
The incidence of melanoma in Denmark has increased by over 4% per year during the past 25 years and by 2012, the yearly incidence was ~30 per 100,000 person-years. 1 Melanoma is the fourth and sixth most common cancer type, respectively, in women and men in Denmark. 2 Despite a higher incidence rate among persons with higher socioeconomic position, lower socioeconomic position has been associated with poorer survival in this patient group, [3][4][5] and we need to know more about where in the cancer pathway these survival disparities occur. A possible explanation is delayed diagnosis in patients with lower socioeconomic position, and more knowledge is needed in order to detect cancer early in all patient groups and to identify groups at high risk of delayed diagnosis.
A late diagnosis may result in advanced cancer stage at the time of diagnosis, and hypothesized explanations are delay in recognizing symptoms of the cancer, delayed health care seeking or later referral to specialized care among patients with lower socioeconomic position. The presence of other chronic disease, which is more frequent among patients with lower socioeconomic position, may influence the timing of cancer diagnosis either through increased surveillance resulting from more frequent health care contacts for the condition in question or, conversely, by reducing the individual resources available to manage further health problems. Histological type of the tumor may also be differentially distributed according to socioeconomic group because some tumor types occur mainly among people with a certain lifestyle or risk behavior in relation to sun exposure.
Furthermore, patients with lower socioeconomic position also tend to live in more rural rather than urban areas, where access to health care services may be lower.
Several studies have shown that patients living in neighborhood areas with lower socioeconomic position tended to be diagnosed at a later stage of melanoma. 4 Besides results from two Swedish studies, 6,7 evidence is sparse from nationwide, population-based studies about the effect of individual level socioeconomic factors, such as education and income, on stage of cancer in melanoma patients. The role of comorbidity has only rarely been investigated, and only a few studies have looked at major geographical differences in combination with the socioeconomic factors.
This study presents results from Denmark where most primary and secondary health care services including all cancer treatments are tax-paid and thereby free of charge, with the aim of minimizing differential access to diagnosis and treatments. A referral from primary to secondary care is required, and the general practitioners play the role of gatekeepers to the rest of the health care system. Data are obtained from a nationwide Clinical Quality register with a coverage of ~95% of all Danish patients with melanoma in recent years 8 and unique individual socioeconomic information from national administrative registers. The aim of the study was to investigate whether educational level, disposable income, cohabitating status or region of residence is associated with cancer stage and further to analyze the role of comorbidity and tumor type in these potential relations.
---
Methods
---
Study population
From the Danish Melanoma Database (DMD), we identified 13,626 patients diagnosed with their first invasive melanoma between 2008 and 2014. DMD is a clinical register containing prospective and systematically collected data related to clinical observations, diagnostic procedures, tumor characteristics, treatments and outcomes. It was established in 1985 and now has a national coverage of ~93-96%. 8
---
Clinical variables
Information on cancer T-, N-and M-stage; tumor location; histological subtype; tumor thickness; and ulceration was obtained from the DMD. The clinical stage at diagnosis was categorized according to AJCC's 6th (2008-2013) and 7th edition (2013-2014), 9,10 and for the analyses, cancer stage was divided into early (clinical stage I-IIA) and advanced-stage cancer (clinical stage IIB-IV). This cut-point is in accordance with the Danish follow-up program for melanoma, where stage IA is assessed as low-risk cancer and IB-IIA as intermediate-risk cancer, while stage IIB-IV include the thickest tumors (stage IIB and IIC), with regional spread (stage III) or distant metastases (stage IV), all of which have the highest risk of relapse and dismal outcome. 11 Tumors were grouped into histological subtypes: superficial spreading malignant melanoma, lentigo maligna melanoma, nodular melanoma, other and unknown/unclassified. Data on comorbid conditions were obtained from the Danish National Patient Register, which is an administrative register containing data from all hospitalizations at somatic wards in Denmark since 1977. 12 Diagnoses other than melanoma were retrieved, and the Charlson comorbidity index (CCI) 13 was calculated. The CCI covers 19 selected conditions with a score from 1 to 6 by degree of severity, and these conditions were summed over the period from 10 years before until 1 year before the date of the melanoma diagnosis. The CCI score was grouped into 0 (none), 1-2 and 3+.
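To make the comorbidity scoring concrete, the following minimal Python sketch illustrates how a Charlson score could be summed over the 10-years-before to 1-year-before window and collapsed into the groups used in the study (0, 1-2, 3+). The condition names, weights shown, and data layout are illustrative assumptions; the paper's actual extraction from the Danish National Patient Register and the full 19-condition mapping are not reproduced here.

```python
from datetime import date
from typing import Iterable, Tuple

# Hypothetical subset of the 19 Charlson conditions and their severity weights (1-6);
# the real ICD-code-to-condition mapping is assumed, not taken from the paper.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "diabetes_with_complications": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumour": 6,
}

def cci_group(diagnoses: Iterable[Tuple[str, date]], melanoma_dx: date) -> str:
    """Sum weights for conditions recorded from 10 years before until 1 year
    before the melanoma diagnosis, then group the score as 0, 1-2 or 3+."""
    window_start = melanoma_dx.replace(year=melanoma_dx.year - 10)
    window_end = melanoma_dx.replace(year=melanoma_dx.year - 1)
    counted = set()
    score = 0
    for condition, dx_date in diagnoses:
        if (condition in CHARLSON_WEIGHTS and condition not in counted
                and window_start <= dx_date <= window_end):
            counted.add(condition)               # count each condition only once
            score += CHARLSON_WEIGHTS[condition]
    if score == 0:
        return "0"
    return "1-2" if score <= 2 else "3+"

# Example: one qualifying diagnosis about five years before the melanoma diagnosis.
print(cci_group([("diabetes_with_complications", date(2005, 3, 1))],
                melanoma_dx=date(2010, 6, 15)))   # -> "1-2"
```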
---
Sociodemographic variables
Individual level sociodemographic factors were obtained by linking the unique personal identification number (assigned to all Danish residents) of the study population to the registers of Statistics Denmark, which contains data on each individual and is updated annually. [14][15][16] We retrieved information on educational level, income and cohabiting status 1 year before diagnosis for each patient.
Education was divided into three categories based on Statistics Denmark's recommendations of categorizing the individual's highest attained education level: short education (7/9-12 years of basic or youth education), medium education (10-12 years of vocational education) and longer education (short, medium and longer higher education [>13 years of education]).
Yearly disposable income per adult person in the household was calculated and categorized into three groups based on quartiles of the disposable income per person in the population: 1st quartile (<150,708 Danish crowns [DKK]), 2nd-3rd quartile (150,708-279,715 DKK) and 4th quartile (>279,715 DKK). Persons with a high negative income (>50,000 DKK) were excluded from the analyses. One thousand DKK equals ~135 Euros.
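As a small illustration of this income grouping, the pandas sketch below applies the reported quartile cut-points and the exclusion of persons with a large negative income. The data frame and column name are hypothetical, and the handling of values falling exactly on a cut-point is an assumption not specified in the paper.

```python
import pandas as pd

# Hypothetical patient-level data: yearly disposable income per adult in the
# household, in DKK (column name is illustrative only).
df = pd.DataFrame({"income_dkk": [-60_000, 90_000, 200_000, 310_000]})

# Exclude persons with a high negative income (beyond -50,000 DKK), as in the study.
df = df[df["income_dkk"] >= -50_000].copy()

# Group by the population quartile cut-points reported in the paper.
df["income_group"] = pd.cut(
    df["income_dkk"],
    bins=[float("-inf"), 150_708, 279_715, float("inf")],
    labels=["1st quartile", "2nd-3rd quartile", "4th quartile"],
)
print(df)
```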
Cohabiting status was defined as living with a partner (married or cohabiting) or living without a partner (single, widow/widower or divorced). Cohabiting was defined, in the absence of marriage, as two adults of the opposite sex with a maximum age difference of 15 years who live at the same address and have no family relation, or who have a mutual child.
Information about age, sex and region of residence was obtained from the Civil Registration System.
From the study population, we excluded 105 patients because there was no match on any sociodemographic information, and a further 178 persons were excluded because they had high negative income in Statistics Denmark's registers. A further 328 patients under the age of 25 years were excluded, as those persons might not have reached their final educational level. This yielded 13,015 patients (Table 1). For the adjusted analysis, 2,597 patients (20%) with missing TNM information or unclassifiable clinical stage and 260 patients with unknown educational level were excluded, which resulted in a study group of 10,158 patients (Table 2).
---
Statistical analyses
The associations between socioeconomic and -demographic factors and cancer diagnosis stage were analyzed in a series of logistic regression models. First, the associations between sociodemographic factors and cancer stage were adjusted for age and sex. Second, the results were mutually adjusted for other sociodemographic factors, except for educational level, which was not adjusted for income, because income was hypothesized to be a clear mediator between education and cancer stage. Third, the model included additional adjustment for tumor type and the fourth model also adjusted for comorbidity (CCI index).
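As a rough, non-authoritative sketch of this adjustment sequence, the Python/statsmodels snippet below fits the four nested logistic models for the education exposure and returns odds ratios. The outcome and covariate names are hypothetical (one row per patient, with 'advanced' coded 1 for stage IIB-IV), and income is deliberately left out of the education models because the paper treats income as a mediator between education and stage; this is not the authors' code.

```python
import numpy as np
import statsmodels.formula.api as smf

# Hypothetical analysis data set: one row per patient, binary advanced-stage outcome,
# categorical covariates with illustrative names.
MODELS = {
    "model_1_age_sex":     "advanced ~ C(education) + age + C(sex)",
    "model_2_sociodem":    "advanced ~ C(education) + age + C(sex) + C(cohabiting) + C(region)",
    "model_3_plus_tumor":  "advanced ~ C(education) + age + C(sex) + C(cohabiting) + C(region) + C(tumor_type)",
    "model_4_plus_comorb": "advanced ~ C(education) + age + C(sex) + C(cohabiting) + C(region) + C(tumor_type) + C(cci_group)",
}

def odds_ratios(fit):
    """Exponentiate coefficients and 95% confidence limits into odds ratios."""
    table = fit.conf_int()
    table["OR"] = fit.params
    return np.exp(table[["OR", 0, 1]])

def fit_adjustment_sequence(df):
    """Fit the four nested logistic models and collect odds ratios for each."""
    return {name: odds_ratios(smf.logit(formula, data=df).fit(disp=False))
            for name, formula in MODELS.items()}
```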
Interactions of single socioeconomic variables with sex, age, comorbidity and localization of the tumor were tested one pair at a time with Wald test statistics. A significant interaction existed between education and comorbidity, with a higher effect of comorbidity on stage for patients with longer compared to short education; however, this was driven by a very small group of patients with long education level and comorbidity 3+ and therefore results were not stratified on this basis. There was an interaction between sex and cohabiting status; however, it was only borderline significant (P < 0.07), and sex-stratified data are not shown.
Because data completeness was higher in 2013-2014 (start of the DMD as a clinical quality register) than in 2008-2012, we repeated the analyses including only these two most recent years to assure that the interpretation of the results were close to what was found from analyzing the whole cohort.
In supplementary analyses, we repeated all the analyses with the outcome variable clinical stage dichotomized into stage I vs II-IV in order to assure that results were the same even if the cut-point for early vs advanced cancer was changed. This yielded estimates that were close to what is reported in Table 2, and the interpretation of the results from the two categorizations was the same.
The analyses were carried out in SAS 9.4 with the PROC GENMOD procedure, and the level of significance was P < 0.05.
---
Ethics
Use of data for this project was approved by the Danish Health Authorities under the Capital Region of Denmark (J.no.: 2012-58-0004).
---
Results
The descriptive statistics in Table 1 show clinical and sociodemographic factors distributed according to the main exposure of interest: educational level. More patients with short compared to long education tended to have higher cancer stages, and thereby also thicker tumors and ulceration, and more short-educated patients had nodular malignant melanoma and comorbidity. Patients with shortest education also tended to have higher age, lower income, lived alone and outside the Capital Region.
Table 2 shows that patients with shorter education, with lower income, living without partner, with male sex, higher age, with comorbidity and who lived in the Northern, Central or Zealand region of Denmark had an elevated odds ratio (OR) of being diagnosed with advanced-stage cancer when adjusted for sex, age and sociodemographic factors. For example, the OR for advanced-stage cancer in patients with short compared to longest education was 1.50 (1.25-1.67) and for lowest vs highest income level OR was 1.59 (1.33-1.89), while OR for advanced cancer stage was 1.52 (1.30-1.78) for patients living in Zealand compared to the Capital region (Table 2, model 2).
When adjusting for tumor type and comorbidity (Table 2, models 3 and 4, respectively), the ORs for advanced-stage cancer by socioeconomic and -demographic factors were only a little lower than the ORs in model 2, ie, for short vs longer education the adjusted OR was 1.40 (1.20-1.63) in the fully adjusted model. The estimates for region of residence were lower when adjusted for tumor type (model 3) than the confounder-adjusted estimates (model 2); however, this reduction in ORs was not found when restricting data to patients with diagnosis year 2013-2014 (data not shown).
Patients with high comorbidity burden had a higher OR of advanced cancer (comorbidity 3+ vs no comorbidity, adjusted OR = 1.54 [1.24-1.93]).
---
Discussion
The results of the present study showed that patients who were socially disadvantaged in terms of education, income or partner status had an increased risk of a diagnosis with advanced-staged melanoma. Region of residence was also associated with a higher risk of advanced stage when living in the Northern, Central or Zealand health care region. The effects of the socioeconomic factors seemed unexplained by differential distribution of comorbidity or tumor types among different socioeconomic groups.
It is an important finding that several different indicators of socioeconomic position were related to cancer stage at diagnosis, and this adds evidence to the current literature. Studies from the USA, Europe and New Zealand consistently showed that patients living in neighborhood areas with lower socioeconomic position tended to be diagnosed with a more advanced stage of melanoma. 4,[17][18][19][20] These studies were, however, based on socioeconomic measures at area level, with the risk of misclassification. Larger differences in health outcomes may be found in populations from the USA because of its insurance-based health care system, as opposed to the mostly tax-based systems that exist especially in the Northern European countries; this should be considered when directly comparing inequality results. A nationwide population-based Swedish study with individually measured educational information reported a dose-response relation between three levels of education and disease stage with effect estimates close to our results. 7 Besides this, a few other smaller studies linked data on individual level education to tumor thickness, which is a measure of local advancement of the disease, and reported short education and unemployment to be associated with thick tumors. 4 Being married or living with a partner has earlier been associated with an early diagnosis of melanoma. 4,21 In a nationwide population-based Swedish study, findings of advanced disease in single living were most pronounced among men. 6 We found a similar trend of sex difference (data not shown), and especially men living without a partner seem to be a vulnerable group in terms of diagnostic delay.
A questionnaire study from USA on the link from socioeconomic position to advanced melanoma points to the following underlying reasons for such an association: patients with short education were more likely to believe that melanoma was not very serious, they had less knowledge of skin symptoms of melanoma, they were less likely to have routinely examined their skin and to have ever been told by a physician that they had atypical moles or that they were at risk of skin cancer, or had been instructed by a physician how to look for signs of melanoma. 22 However, results from older studies from the Northern Europe are conflicting on the association between socioeconomic position and knowledge and understanding of melanoma. Other studies indicate that higher socioeconomic position is associated with more use of specialist health care services in general, 23 and a lower access to specialist dermatologist or specialized hospital treatment among patients with lower socioeconomic position could be an explaining factor for their delayed diagnosis.
Taking several socioeconomic factors into account, we found that patients with residency in three out of five geographical health care regions had a higher risk of advanced-stage cancer. In a recent Swedish study, differences in stage distribution were found across smaller geographical areas, 24 and further in the population-based Swedish study, rural/other urban areas had higher melanoma-specific survival compared to metropolitan areas. 7 Each of the five Danish Regions has responsibility for primary and secondary health care, and the organization of the referral to specialized care might thus be different between regions. Furthermore, the outer areas of Denmark have fewer primary and specialized doctors per inhabitant and longer distances to care. For instance, in the Zealand region, there is currently what corresponds to ~16 specialized treatment centers for dermatology/plastic surgery compared to ~27 centers per 100,000 inhabitants in the Capital Region. 25 That being said, region of residence may also be a mixture of unmeasured social factors and cultural/behavioral factors as well as a measure of organization of care.
Comorbidity did not seem to explain the socioeconomic difference in stage at diagnosis, although it was a significant independent risk factor for being diagnosed with advanced cancer. The findings point to lower awareness of, or decreased resources for dealing with, a health problem other than the comorbid disorder. A similar association was found for melanoma screening in primary practice in France, where chronic disease was associated with non-participation. 26 A Danish population-based study showed an interaction between comorbidity and cancer stage, with increased mortality among patients with advanced melanoma and high comorbidity, 27 underlining the importance of a focus on comorbidity in detection and treatment of melanoma.
We adjusted the socioeconomic and geographical results for histological type of the cancer, because it was hypothesized that some tumor types occur mostly in groups of people with a certain lifestyle or risk behavior. Lentigo maligna melanoma and superficial spreading melanoma are related to sun exposure, and sun habits could be speculated to change in a direction where more people from lower socioeconomic groups are exposed to sun or especially to use of sunbeds. 23 However, it was found that more of the patients with longest education were diagnosed with superficial spreading malignant melanoma, whereas more patients with short education had nodular melanoma -even though the risk profile of nodular melanomas is primarily related to biology rather than behavior. As nodular melanomas are often fast growing and sometimes amelanotic, increased awareness hereof is crucial. 28 Tumor type seemed to explain part of the geographical differences in cancer stage, but not when looking at the data solely from 2013 to 2014. We suggest that missing data on tumor histology in the early study period drive the finding since a larger part with unknown/unclassified histology appeared in the North and Central regions (19 and 23%, respectively, for the whole study period vs 8% in the Capital Region, data not shown), which may bias the effect of tumor type.
Strengths of the current study include the populationbased data from both a clinical database and administrative registers, which minimize selection bias, information bias and misclassification of both exposure and outcome measures.
Limitations are some missing clinical data for patients diagnosed during the years 2008-2012 (before onset of the DMD as a Clinical Quality register); however, there was an equal distribution of missing/unclassified TNM stage in the groups of patients with lower and higher socioeconomic position. Furthermore, we checked that the main results were similar for the study period as a whole as for the years 2013-2014.
To measure comorbidity, we used the CCI with summarized data of hospital diagnoses and therefore milder diseases not treated or followed up in hospital setting were not included. This may have resulted in some misclassification with the risk of an underestimation of the true effect of comorbidity on outcome.
Another limitation is that we did not have information on contacts with primary practicing doctors, which could have pointed to some explanation of why there is a socioeconomic difference in cancer stage -patient's delay in health care seeking or doctor's delay in referral to specialized care. These relations should be further investigated in future studies.
The incidence of melanoma is increasing 1 -an increase that has newly been shown across all socioeconomic groups, but with the highest increase of regional-distant disease among patients from the lowest socioeconomic areas in USA, 29 and reducing socioeconomic and sex inequalities in stage at diagnosis would result in substantial reductions in deaths from melanoma. 19 Results from our study document important socioeconomic and -demographic differences in stage at diagnosis. Initiatives should be directed to social disadvantaged groups, men and older people in order to increase awareness of symptoms of melanoma. In primary care, an increased attention should be paid to patients from these groups in order to discover skin changes or melanoma at an early stage. Additional efforts to improve early diagnosis of nodular melanomas would improve the early vs advanced ratio and thus have the potential to affect mortality significantly. The newly suggested amendment to the diagnostic ABCD rule with EFG for Elevated, Firm and Growing nodule should be applied, and "when in doubt, cut it out" should be taught to both patients and doctors. 28 Further studies should investigate regional differences in delay, effects of number of specialized doctors per inhabitant as well as different referral patterns from primary to secondary health care across health care regions.
---
Disclosure
The authors report no conflicts of interest in this work.
| 21,850 | 1,846 |
eb8a9070de33e7de7f75df8c1ae113033e9a75b9 | Predictors of Retention among Men Attending STI Clinics in HIV Prevention Programs and Research: A Case Control Study in Pune, India | 2,011 | [
"JournalArticle"
] | Background: Retention is critical in HIV prevention programs and clinical research. We studied retention in the three modeled scenarios of primary prevention programs, cohort studies and clinical trials to identify predictors of retention. Methodology/Principal Findings: Men attending Sexually Transmitted Infection (STI) clinics (n = 10,801) were followed in a cohort study spanning over a ten year period (1993-2002) in Pune, India. Using pre-set definitions, cases with optimal retention in prevention program (n = 1286), cohort study (n = 940) and clinical trial (n = 896) were identified from this cohort. Equal number of controls matched for age and period of enrollment were selected. A case control analysis using conditional logistic regression was performed. Being employed was a predictor of lower retention in all the three modeled scenarios. Presence of genital ulcer disease (GUD), history of commercial sex work and living away from the family were predictors of lower retention in primary prevention, cohort study and clinical trial models respectively. Alcohol consumption predicted lower retention in cohort study and clinical trial models. Married monogamous men were less likely to be retained in the primary prevention and cohort study models. Conclusions/Significance: Predicting potential drop-outs among the beneficiaries or research participants at entry point in the prevention programs and research respectively is possible. Suitable interventions might help in optimizing retention. Customized counseling to prepare the clients properly may help in their retention. | Introduction
With an estimated 2.3 million HIV infected persons, India has the third largest HIV burden in any country in the world [1]. One of the goals of the current third phase of National AIDS Control Program (NACP-III) in India is to halt and reverse the HIV epidemic by 2012 by implementing an integrated strategy focusing on prevention, care and treatment of HIV/ AIDS [2]. This goal can be achieved by maintaining the primary prevention continuum, effectively tracking the HIV incidence in various sub-populations and implementing appropriately evaluated prevention and therapeutic interventions.
Projections for the year 2031 marking 50 years of AIDS pandemic have indicated that almost three times the current resources will be required to control the epidemic by focusing on high impact tools, efforts to attain behavior-change and efficient and effective treatment [3]. All such efforts would require high level of utilization of services and programs by the stakeholders and their continued participation in the program. Retention in prevention programs, cohort studies and clinical trials is very critical and yet can be very challenging. The losses to follow-up (LTFU) might result from participants' loss of interest, inadequate oversight by the study investigators or absence of built-in mechanisms for tracking the study participants being lost [4]. Recent studies have shown that in resource poor countries, investigators can achieve high retention rates over long follow-up period in marginalized or ''hard to reach'' populations by employing special efforts which are expensive and management intensive [4,5]. Health program managers and research scientists have to take necessary steps to ensure that their clients return to the health facility at the assigned time points. Hence, understanding of dynamics of retention of clients is likely to help in planning measures to retain people in prevention programs and research settings requiring long follow-up such as cohort studies and clinical trials. Our long-term prospective study provided an opportunity to estimate levels of retention and their predictors using a modeling approach in the context of various HIV prevention and research program related scenarios such as those described below. We present three possible scenarios in the area of HIV prevention and research wherein retention is crucial:
---
1) Primary prevention through Voluntary Counseling and Testing (VCT): We hypothesized that high uptake of voluntary counseling and testing services for HIV, an important primary prevention strategy of the National AIDS Control Program of India, would contribute to reliable estimation of HIV burden in various sub-populations and may guide in deciding strategies for secondary prevention and control of AIDS. We studied factors affecting retention in the three HIV prevention and research scenarios described above among men enrolled in a high risk cohort of patients having current or past history of sexually transmitted infections (STI) in Pune, India. We explored demographic, behavioral and biological factors that might predict retention in the modeled scenarios of primary prevention programs, cohort studies and clinical trials.
---
Methods
---
Ethics Statement
The cohort studies were approved by the national and international scientific, ethics and regulatory committees or boards of National AIDS Research Institute, India and Johns Hopkins University, USA. All participants were enrolled after obtaining written informed consent as approved by the Ethics Committee.
Between 1993 and 2002, as part of collaborative studies between National AIDS Research Institute in Pune, India and Johns Hopkins University in the United States of America, cohort studies were undertaken in the industrial city of Pune located in the high HIV prevalence western state of Maharashtra in India. Using this dataset we carried out case-control analysis to study factors affecting retention of clients in HIV prevention research and programs. The ''cases'' in the three distinct modeled scenarios were selected from the cohort of male STI clinic attendees.
The overall aim of the parent cohort study was to prepare sites and generate baseline data for undertaking Phase I, II and III HIV prevention clinical trials. Men with current or past history of STI, female sex workers (FSWs) and non sex worker females (non-FSWs) attending STI clinics were enrolled in the parent cohort study after they received their HIV negative report. Thus all those who tested HIV negative were offered enrollment in a longitudinal study requiring quarterly visits for a period of two years as described in our previous papers [6,7]. In this paper, we describe predictors of retention among men in the STI cohort using case-control analysis. Three modeled scenarios of Primary prevention, Cohort study and Clinical trials were identified as described previously.
---
Participants
''Cases'' represented individuals who were ''retained'' in the hypothetical scenarios created for the retention analysis of primary prevention, cohort studies and clinical trials described above. Age and time of recruitment matched ''controls'' were selected from the STI cohort in 1:1 ratio.
Defining outcome variable "retention" in three distinct modeled scenarios
1. Retention in primary prevention scenario: Individuals who returned for their first follow-up at 3 months after they received their HIV test report.
2. Retention in cohort studies scenario: Individuals who reported for follow-up to the study clinics at least once at the end of the first year and then at the end of the second year.
3. Retention in clinical trials scenario: Individuals who completed at least three scheduled visits both during the first year and the second year after enrolment.
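A minimal sketch of how these three definitions could be operationalized against a participant's visit record is shown below. The representation of visits as months since enrolment and the exact visit-window logic are simplifying assumptions; the study's actual scheduling rules are not reproduced here.

```python
def classify_retention(visit_months):
    """Apply the three retention definitions to one participant's completed
    follow-up visits, given as months since enrolment (quarterly schedule,
    two-year follow-up)."""
    year1 = [m for m in visit_months if 0 < m <= 12]
    year2 = [m for m in visit_months if 12 < m <= 24]
    return {
        # Scenario 1: returned for the first follow-up visit at 3 months.
        "primary_prevention": 3 in visit_months,
        # Scenario 2: at least one visit in each of the first and second year.
        "cohort_study": bool(year1) and bool(year2),
        # Scenario 3: at least three scheduled visits in each follow-up year.
        "clinical_trial": len(year1) >= 3 and len(year2) >= 3,
    }

# Example: all quarterly visits completed except months 9 and 21.
print(classify_retention([3, 6, 12, 15, 18, 24]))
# {'primary_prevention': True, 'cohort_study': True, 'clinical_trial': True}
```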
In the parent cohort study from which this analysis is done, only standard counseling, offering HIV test and giving scheduled date for the next follow-up visit was done. No additional efforts were made to contact the participants either telephonically or through home visits to specifically improve retention.
---
Statistical analysis
Univariate and multivariate conditional logistic regression analyses were performed to identify demographic (religion, marital status, education, employment), behavioral (living away from family, alcohol consumption, number of FSW partners, age at first sex, involvement in commercial sex work) and biological (tattooing, diagnosis of various types of STI, syndromic diagnosis of genital ulcer and discharge type of diseases) factors associated with retention in the three modeled scenarios respectively. The comparison of baseline characteristics of individuals in all the three scenarios was done using the Chi-square or Fisher's exact test, whichever was applicable. The variables that were found to be significantly associated with retention in the univariate models were retained in the multivariate models. As an exception, the variable 'number of FSW partners', although not significant in the univariate model, was retained in the multivariate model due to its known relationship with retention [8,9]. Forest plots in Excel were used to generate Figures 1a-c for the multivariate analysis [10]. Data were analyzed using Intercooled Stata version 10.0.
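The paper reports running conditional logistic regression in Stata for the matched case-control analysis; a rough equivalent in Python is sketched below using statsmodels' ConditionalLogit with the matched pair as the grouping variable. The toy data and variable names are purely illustrative and are not taken from the study data set.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

# Hypothetical 1:1 matched data: 'pair_id' links each case to its age- and
# enrolment-period-matched control; 'case' = 1 for retained participants.
df = pd.DataFrame({
    "pair_id":   [1, 1, 2, 2, 3, 3, 4, 4],
    "case":      [1, 0, 1, 0, 1, 0, 1, 0],
    "employed":  [0, 1, 1, 0, 1, 1, 0, 1],
    "educ_high": [1, 0, 0, 1, 1, 1, 1, 0],
})

X = df[["employed", "educ_high"]].astype(float)
fit = ConditionalLogit(df["case"], X, groups=df["pair_id"]).fit()
print(np.exp(fit.params))   # conditional odds ratios for retention
```

Univariate screening as described above would simply repeat the same fit with one predictor column at a time before the multivariate model.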
---
Results
Between 1993 and 2002, a total of 14,137 individuals visited the STI clinics in this study. Of these, 10,801 (76%) were men, 3252 (23%) were women and 83 (0.5%) were eunuchs or trans-genders. Of all the 10,801 screened male STI patients, 8631 (80%) were found to be HIV uninfected and were enrolled in the parent cohort study. The present case-control analysis is restricted to these enrolled men.
Most of the men were employed (89%), belonged to Hindu religion (81%), were living with their families (77%) and nearly 50% were 'ever married'. More than half of these men reported history of alcohol consumption and 84% reported having FSW contact in the lifetime. The median age at initiation of sex among them was 19 years. Thirty two percent of the men presented themselves with the diagnosis of genital ulcer disease (GUD) (Data not shown in the tables).
---
Profile of men in case control analysis in three modeled scenario
A total of 1286, 940 and 896 cases and equal number of matched controls were considered in three respective modeled scenarios of primary prevention, cohort study and clinical trials (Table 1). Cases and controls differed significantly for various baseline demographic and behavioral characteristics.
---
Predictors of retention in scenario 1: primary prevention
Marital status, education, employment, diagnosis of GUD and diagnosis of any STI were found to be associated with retention in the univariate analysis (Table 2). In the multivariate analysis (Fig. 1a), men who were married and monogamous (p = 0.03), employed (p = 0.02) and those with the clinical diagnosis of GUD (p = 0.04) were less likely to return for the follow up visit. In contrast, male STI patients reporting higher level of education (p,0.001) and those who had more than three FSW partners were more likely to report back for follow-up (p = 0.03).
---
Predictors of retention in scenario 2: cohort study
In the univariate analysis, marital status, education, employment, alcohol consumption and involvement in sex work were observed to be associated with retention (Table 1). In the multivariate analysis (Fig. 1 b), men who were married monogamous (p = 0.001), employed (p = 0.001), who gave history of alcohol consumption (p = 0.002) or those who were involved in sex work (p = 0.001) were 30% less likely to be retained in the cohort study. All these variables were found to be independent predictors of lower retention. However, men who were educated to high school and beyond were almost 2 times more likely to be retained in the cohort study scenario (p,0.001).
---
Predictors of retention in scenario 3: clinical trials
Marital status, living away from the family, education, employment, alcohol consumption, number of FSW partners, age at first sexual intercourse and diagnosis of STI were significantly associated with retention in the clinical trial scenario in the univariate analysis (Table 1). In the multivariate analysis, independent predictors of retention were living away from the family (p = 0.04), being employed (p = 0.003) and habit of alcohol consumption (p,0.001). More educated male patients or those who had more than three FSW partners or those who initiated sex at an older age were almost 1.5 times more likely to be retained and maintain rigorous follow-up schedule of a clinical trial scenario (Fig. 1c).
---
Discussion
We have used data from large cohort studies on STI patients in Pune, India in modeled scenarios to study the extent of retention and determinants of retention in male STI patients that constitutes an important bridge population in HIV transmission in India [11]. We have identified demographic, behavioral and biological factors that might predict adherence/ non adherence of male STI patients to suggested visit schedules. We expect that this knowledge would be very useful to design specific strategies that might assist in optimizing retention in HIV prevention research and programs. It is possible to identify potential defaulters for retention and implement appropriate interventions. This might be less expensive than tracking patients or research participants after enrollment.
Being employed was a common predictor of lower retention across all the three study models. Higher level of education was associated with a greater likelihood of retention across all three modeled scenarios. Education level among high risk men in India is low [12][13]. Additional efforts are required for less educated or illiterate men to retain them effectively in primary prevention programs and clinical trials. Similar observations have been made in other studies among men who have sex with men [14][15][16]. Our observation also corroborated a similar finding in the NIMH HIV prevention trial [17]. As the majority of VCT center attendees in the Government sector facilities in India are less educated [18], special efforts to improve their retention in primary prevention will be required. Additionally, we observed that retention was lower among employed men, although the education level is expected to be high among them. Paucity of time is a plausible reason why employed men did not return for repeated follow-up visits, as reported by several investigators [19][20][21]. To facilitate retention, it might be necessary to keep the health facilities and research clinics open and available outside routine work hours. Presence of GUD, history of commercial sex work and living away from the family were predictors of lower retention in primary prevention, cohort study and clinical trial models respectively. Alcohol consumption predicted lower retention in the cohort study and clinical trial models while the married monogamous men had lower likelihood of retention in the primary prevention and cohort study models.
It is well known that in therapeutic programs, benefits are generally immediate and more readily visible. In contrast, success of prevention programs lies in better, sustained and prolonged utilization of services which indicates 'retention needs'. Retention in primary prevention and allied research is expected to be dependent on many factors and strategies such as retention counseling, quality of delivery of programmatic and research activities, and participant related factors such as motivation, costs and time required to be spent by them. As the prevention programs mature and new prevention trials are undertaken, the need to identify potential drop outs has to be addressed on priority. Optimizing retention of the end-users is crucial for assessing efficacy [22] and hence strategies should be considered to address various factors influencing retention during implementation of prevention programs and research. Predictors of retention identified in the study could be used for developing an instrument to identify the clients who are likely to fail to return for required follow-up visits either in prevention program or in prevention research. Using such an instrument could be a cost effective strategy to minimize 'drop-outs' rather than using expensive measures to track participants or patients who are lost to follow-up later.
It has been suggested that both prevention and adherence science need to expand beyond individual boundaries to learn more about motivational and structural strategies that can be applied to large populations so that prevention technologies have adequate time to prove useful when implemented in the communities [5]. Therefore it is relevant to explore individual factors as well as those related to individual's family or societal environment that can prevent retention in prevention or research programs.
Poor sexual health seeking behavior among men despite their high risk behavior poses a grave challenge [23]. We observed that married men, who were monogamous, were less likely to be retained in prevention programs and cohort study scenarios in this study. The precise reasons for this observation may have to be explored through qualitative studies. Important role of spouses in men's health seeking has been reported [24]. Several studies have also reported that men who are living away from spouse as well as divorced or single individuals have high risk behaviors [12] and higher dropout rate from the offered prevention umbrella [25][26][27][28]. Our observation that men who were 'living away from family' were less likely to be retained in the clinical trials scenario provides supporting evidence to this possibility. All these observations are strongly suggestive of better health seeking by men having family support. We feel that couple centered approach and involvement of female partners in male oriented programs may contribute to the success of program for men. However, this approach has an inherent limitation that men will have to share information about their health and sickness with their spouses. Counseling sessions in programs and research could focus on specifically discussing the role of spouses and families not only in improving health seeking, but also in keeping up with the visit schedules of programs or studies they are participating in.
Among the behavioral characteristics, those men who reported having more than three female sex worker partners were more likely to return for follow-up in the primary prevention and clinical trial scenarios. This probably reflects men's 'self perception' about their risk behavior. Health seeking in terms of regular and frequent follow up is perhaps better among men practicing high risk behavior. Focused attention would be required to be given on men reporting high risk behavior less frequently. There is an opportunity to effectively intervene to achieve behavioral change through prevention programs.
In India, male commercial sex work is all but invisible and not much is currently known about the status of male sex workers although some studies have reported high HIV prevalence among them indicating a need to develop new [29], innovative interventions targeted towards men in commercial sex work. In the present study among male STI patients, men reporting commercial sex work were less likely to be retained in the cohort study scenario. This is a high risk population and a reliable estimate of HIV incidence in this category of men is an important public health need. Additionally this population would also be targeted for Phase IIb or III studies of HIV prevention technologies and their retention in future clinical trials would be very critical. Lower age at sex initiation has been reported to be associated with early HIV infection in this cohort [30]. Hence, emphasis should be given on targeting younger men in prevention programs and ensuring their continued retention in the programs to sustain safer behavior. Alcohol intake has been reported as a predictor of non-retention in several studies [17,31,32]. It was no surprise to find that men who gave a history of alcohol consumption were less likely to be retained in our study as well. Long term commitment might be a challenge in cases of alcohol addiction. It might be important to emphasize on identification of alcohol consuming behavior at the entry point of prevention settings and making special efforts to ensure retention of alcohol consuming individuals under the HIV prevention umbrella.
The diagnosis of GUD was an independent predictor of return for a follow-up visit within 3 months of enrollment, i.e., the primary prevention scenario. This observation has specific public health significance because it provides opportunities for complete treatment of GUD and appropriate counseling for behavior change. We have already reported a decline in HIV acquisition risk with the decline in GUDs [7]. GUDs are "visible or noticeable" STI that could motivate a person to seek further medical advice, and hence such individuals are probably more likely to return to the study clinics. However, it has been reported that non-GUD STIs are also associated with high HIV prevalence [33][34]. Hence, it is advisable that men with clinically invisible or non-apparent STIs should also be targeted for HIV prevention interventions and retention counseling. Interactive counseling approaches directed at a patient's personal risk, the situations in which such a risk is likely to occur and the use of goal-setting strategies are effective in STI/HIV prevention [35]. Shepherd et al [36] have provided evidence that by enhancing access to treatment and interventions through mechanisms such as counseling, education, and provision of condoms for prevention of STIs, especially GUD among disadvantaged men, the disparity in rates of HIV incidence could be lessened considerably. As part of the clinical interview, healthcare providers should routinely and regularly obtain sexual histories from their patients and plan retention management measures along with implementing measures for risk reduction. It is important to ensure that the clients continue to practice safe behavior through sustained follow-up.
We recommend that counselors working with participants and beneficiaries of research studies and program should specifically take into consideration clients' occupation, current marital relationship, habit of alcohol consumption, possibility of non-GUD STI, and identify cases that may have a potential for being lost to follow-up. This strategy may prove to be cost effective, less cumbersome and easier to ensure high retention. In future, the identified predictors in this study could be used to develop a counseling check-list with measurable indicators of failure in retention. Such a tool would require validation studies in prevention programs and clinical trial settings.
The recruitment of participants in this study was through public sector-based STI clinics, which is a limitation for the generalizability of the findings. The profiles of clients visiting public and private sector facilities are known to be different [37][38]. Since VCT was primarily offered in a research context in this study, lessons learnt may have some limitations in terms of applicability to primary prevention programs rolled out to the masses. Hence the predictors of retention identified in this study will have to be understood appropriately in the context of patients receiving health care in other facilities. Secondly, the study essentially involves men, and in India men are not only the key decision makers in the community and families but also the major contributors to transmission of HIV in India [20,39]. The National Family Health Survey III data [40] in India has shown that 10-15% of Indian men are at risk of HIV infection. Hence, studies to identify predictors of retention among men gain significance. However, the predictors of retention among women are likely to be different and they must be explored.
We conclude that achieving high levels of retention and preventing drop outs was a challenge in case of all the three scenarios of primary prevention, cohort studies and clinical trials. The knowledge about identified predictors of sub-optimal retention could be useful in developing appropriate retention checklists or tools in case of the above-mentioned prevention and research programs to minimize potential drop-outs.
---
Author Contributions
| 22,551 | 1,644 |
550e1af2cba17734922f384cc2c6719ab8315c70 | Combined Effects of Race and Socioeconomic Status on Cancer Beliefs, Cognitions, and Emotions | 2,019 | [
"JournalArticle",
"Review"
] | Aim: To determine whether socioeconomic status (SES; educational attainment and income) explains the racial gap in cancer beliefs, cognitions, and emotions in a national sample of American adults. Methods: For this cross-sectional study, data came from the Health Information National Trends Survey (HINTS) 2017, which included a nationally representative sample of American adults. The study enrolled 2277 adults who were either non-Hispanic Black (n = 409) or non-Hispanic White (n = 1868). Race, demographic factors (age and gender), SES (i.e., educational attainment and income), health access (insurance status, usual source of care), family history of cancer, fatalistic cancer beliefs, perceived risk of cancer, and cancer worries were measured. We ran structural equation models (SEMs) for data analysis. Results: Race and SES were associated with perceived risk of cancer, cancer worries, and fatalistic cancer beliefs, suggesting that non-Hispanic Blacks, low educational attainment and low income were associated with higher fatalistic cancer beliefs, lower perceived risk of cancer, and less cancer worries. Educational attainment and income only partially mediated the effects of race on cancer beliefs, emotions, and cognitions. Race was directly associated with fatalistic cancer beliefs, perceived risk of cancer, and cancer worries, net of SES. Conclusions: Racial gap in SES is not the only reason behind racial gap in cancer beliefs, cognitions, and emotions. Racial gap in cancer related beliefs, emotions, and cognitions is the result of race and SES rather than race or SES. Elimination of racial gap in socioeconomic status will not be enough for elimination of racial disparities in cancer beliefs, cognitions, and emotions in the United States. | Background
Race [1,2] as well as socioeconomic status (SES) [3-10] impact health outcomes. There is, however, a debate regarding whether the effects of race on health outcomes are fully due to lower SES of the minority groups or not [11][12][13][14][15][16][17][18][19]. Similarly, while race [20][21][22][23][24][25][26][27] and SES [28,29] both impact cancer incidence and outcomes, it is still unknown to what degree SES explains the racial gap in cancer outcomes [30][31][32][33][34][35][36]. While poor cancer beliefs are more common in racial minority groups as well as individuals with low SES [37][38][39][40], we still do not know whether all of the racial disparities in cancer beliefs are due to SES differences between races or race influences cancer beliefs above and beyond SES. A considerable amount of research suggests that SES only partially explains the racial differences in health [11][12][13][14][15][16][17][18][19], a finding which is also shown for cancer outcomes [24,25,[30][31][32][33][34][35][36].
In other terms, it is still unknown whether it is race and SES or race or SES which shape cancer disparities [11,24,25]. If it is race or SES, then racial differences in cancer outcomes are fully explained by SES. In such case, eliminating SES gap across racial groups would be enough for elimination of racial gap in cancer outcomes. If it is race and SES, however, SES would only partially account for racial differences in outcomes [12][13][14][15][16][17][18][19]. In this case, elimination of racial gap in cancer outcomes would require interventions and programs that go beyond equaling SES across racial groups [41][42][43][44][45]. Residual effect of race that is above and beyond SES may be due to racism, discrimination, and culture [41,45]. Thus, understanding of whether SES fully mediates the effect of race on cancer outcomes has practical implications for public and social policy, public health programs, as well as clinical practice.
A considerable body of research has shown that low SES people and Black individuals have a higher risk of cancer [35,[46][47][48], probably due to environmental exposures and behaviors such as poor diet, drinking, and smoking [49]. At the same time, Blacks and low SES people have lower health literacy [50] and a lower trust to the health care system as well as a lower perceived risk of cancer [51,52]. As a result, compared to high SES and White individuals, Black and low SES people have a lower tendency for cancer screening behaviors [53]. As a result, when diagnosed with cancer, cancer is at a more advanced stage, which reduces survival, and worsens the prognosis [35]. These all result in what we know as racial and SES disparities in cancer outcomes [35,[46][47][48]54].
Aims: To expand the current knowledge on this topic, we used a national sample of American adults to test the separate and additive effects of race and SES on fatalistic cancer beliefs, perceived risk of cancer, and cancer worries. The implication of such knowledge will help with designing and implementing the most effective policies, programs, and practices that may eliminate the racial and SES gaps in cancer beliefs, cognitions, and emotions.
---
Methods
---
Design and Setting
Using data from the Health Information National Trends Survey (HINTS-5, Cycle 1, 2017), this was a cross-sectional study. HINTS is a national survey which has been periodically administered by the National Cancer Institute (NCI) since 2003. The HINTS study series provides a nationally representative picture of Americans' cancer related information [55]. HINTS-5, Cycle 1 data were collected between January and May 2017 [56][57][58].
---
Ethics
All participants provided informed written consent. Westat's Institutional Review Board (IRB) approved the HINTS-5 study protocol (Westat's Federal-wide Assurance (FWA) number = FWA00005551, Westat's IRB number = 00000695, the project OMB number = 0920-0589). The National Institutes of Health (NIH) Office of Human Subjects exempted HINTS from IRB review.
---
Sampling
The HINTS sample is composed of American adults (age ≥ 18) who were living in the US and were not institutionalized. HINTS-5, Cycle 1 used a two-stage sampling design in which the first stage was a stratified sample of residential addresses. Any non-vacant residential address was considered eligible. The address list was obtained from the Marketing Systems Group (MSG). In the second sampling stage, one adult was sampled from each selected household. The sampling frame was composed of two strata based on the concentration of minorities (areas with high and areas with low concentrations of racial and ethnic minorities). Equal-probability sampling was applied to sample households from each stratum [55].
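To make the two-stage design concrete, the sketch below draws a stratified equal-probability sample of addresses and then selects one adult per sampled household. It is an illustrative simplification only (toy data structures, no survey weighting or non-response handling), not the actual HINTS sampling code.

```python
# Illustrative sketch of the two-stage design described above (not the HINTS code):
# stage 1 draws addresses with equal probability within each stratum,
# stage 2 samples one adult from each selected household.
import random

def draw_sample(addresses_by_stratum: dict[str, list[str]],
                adults_by_address: dict[str, list[str]],
                n_per_stratum: int) -> list[str]:
    sampled_adults = []
    for stratum, addresses in addresses_by_stratum.items():
        for address in random.sample(addresses, n_per_stratum):   # stage 1
            adults = adults_by_address.get(address, [])
            if adults:                                             # skip vacant households
                sampled_adults.append(random.choice(adults))       # stage 2
    return sampled_adults
```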
---
Surveys
The surveys were mailed to the participants' addresses. A monetary incentive was given to the participants (included in the mails) to increase the participation rate. Two specific toll-free numbers were provided for the respondents to call: one number for English calls and one number for Spanish calls. The overall response rate was 32.4% [55].
---
Study Variables
The study variables included race, age, gender, educational attainment, income, history of cancer in family, health insurance status, cancer worries, fatalistic cancer beliefs, and perceived risk of cancer. Outcome measures included cancer beliefs, perceived risk of cancer, and cancer worries. Race/ethnicity was the independent variable. Educational attainment and income were mediators. Age, gender, history of cancer in family, and health insurance status were covariates.
---
Independent Variable
Race/ethnicity. Race/ethnicity was the independent variable of interest. Race/ethnicity was treated as a dichotomous variable (0 non-Hispanic Whites, 1 non-Hispanic Blacks).
---
Covariates
Demographic Factors. Age and gender were the demographic covariates. Age was an interval measure ranging from 18 to 101. Gender was treated as a dichotomous variable (0 female, 1 male).
Health Insurance Status. Availability of health insurance was measured using the following insurance types: (1) Insurance purchased from insurance companies; (2) Medicare (for people 65 and older, or people with disabilities); (3) Medicaid, Medical Assistance, or other government-assistance plans; (4) TRICARE and any other military health care; (5) Veterans Affairs; (6) Indian Health Services; and (7) any other health coverage plan. Insurance status was operationalized as a dichotomous variable (0 no insurance, 1 any insurance, regardless of its type).
Family History of Cancer. History of cancer in the family was asked using the following single item. "Have any of your family members ever had cancer?" The answers included yes, no, and do not know.
---
Dependent Variables
Fatalistic Cancer Beliefs. Fatalistic cancer beliefs were measured using the stem "How much do you agree or disagree with each of the following statements?" followed by the following items:
(1) "There's not much you can do to lower your chances of getting cancer"; (2) "It seems like everything causes cancer"; (3) "There are so many different recommendations about preventing cancer, it's hard to know which ones to follow"; and (4) "When I think about cancer, I automatically think about death". Answers included four-response Likert items ranging from strongly disagree to strongly agree. A sum score was calculated, with a possible range from four to sixteen. Fatalistic cancer beliefs were operationalized as an interval measure, with higher scores reflecting higher fatalistic beliefs [59].
Perceived Risk of Cancer. Perceived risk of cancer was measured using the following item: "How likely are you to get cancer in your lifetime?" Responses were on a five item Likert scale ranging from (1) very unlikely to (5) very likely. Perceived risk of cancer was operationalized as an interval measure, with a higher score indicative of higher perceived cancer risk [60].
Cancer Worries. Cancer worries were measured using the following item: "How worried are you about getting cancer?" Responses were on a five-point scale ranging from (1) not at all to (5) extremely. Cancer worries were operationalized as an interval measure, with a higher score indicating more cancer worries [61].
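As an illustration of how these operationalizations translate into analysis variables, the sketch below scores the four fatalistic-belief items into the 4-16 sum score and keeps the single-item measures as integers. Column names and the exact middle Likert labels are hypothetical, not taken from the HINTS codebook.

```python
# Hedged sketch of scoring the outcome measures described above.
# Column names and middle response labels are assumptions, not HINTS variable names.
import pandas as pd

LIKERT4 = {"Strongly disagree": 1, "Somewhat disagree": 2,
           "Somewhat agree": 3, "Strongly agree": 4}

FATALISM_ITEMS = ["belief_prevent", "belief_everything",
                  "belief_recommendations", "belief_death"]

def score_outcomes(df: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame(index=df.index)
    # Sum of four 1-4 items -> interval score ranging from 4 to 16
    out["fatalism"] = df[FATALISM_ITEMS].replace(LIKERT4).sum(axis=1)
    # Single-item measures kept as 1-5 integers (higher = more perceived risk / worry)
    out["perceived_risk"] = df["risk_item"].astype(int)
    out["cancer_worries"] = df["worry_item"].astype(int)
    return out
```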
---
Mediators
Educational Attainment. Educational attainment, one of the main SES indicators, was the mediator in this study. Educational attainment was treated as an interval variable ranging from 1 to 5: (1) less than high school graduate, (2) high-school graduate, (3) some college education, (4) completed bachelor's degree, and (5) having post-baccalaureate degrees. Educational attainment ranged from 1 to 5, with a higher score indicating higher SES.
Income. Income, one of the most robust SES indicators, was the other mediator in this study. Income was treated as an interval variable ranging from 1 to 5: (1) Less than $20,000; (2) $20,000-34,999;
(3) $35,000-49,999; (4) $50,000-74,999; (5) $75,000 or more. Income ranged from 1 to 5, with a higher score indicating higher SES.
---
Statistical Analysis
For data analysis, we used Stata 15.0 (Stata Corp., College Station, TX, USA). For our univariate analysis, we reported means or relative frequencies (proportions) with their standard errors (SE). For multivariable analysis, we ran three structural equation models (SEM) [62], one model for each outcome. Specific models were fitted for fatalistic cancer beliefs, perceived risk of cancer, and cancer worries. Race was the main independent variable. Gender, age, insurance status, and having a family member with cancer were the covariates. Educational attainment and income were the mediators. To test whether educational attainment and income fully explain the effect of race on outcomes, we ran models in the pooled sample, without and with educational attainment and income as mediators. Path coefficients, SE, 95% CI, z-values, and p-values were reported. SEM uses maximum likelihood estimation to handle missing data [63,64]. Conventional fit statistics such as the comparative fit index (CFI), the root mean square error of approximation (RMSEA), and the chi-square to degrees of freedom ratio were used. A chi-square to degrees of freedom ratio of less than 4.00, a CFI of more than 0.90, and an RMSEA of less than 0.06 were indicators of good fit [65,66].
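The published models were fitted as SEMs in Stata. As a rough illustration of the mediation logic being tested (not the authors' code), the Python sketch below decomposes the race-outcome association into an indirect path through a mediator and the direct path that remains, using ordinary regressions and a simple product-of-coefficients step; variable names are hypothetical.

```python
# Simplified, regression-based analogue of the SEM mediation test described above.
# Variable and column names are assumptions for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

COVARS = "age + gender + insured + family_cancer"

def mediation_decomposition(df: pd.DataFrame, outcome: str = "fatalism",
                            mediator: str = "education") -> dict:
    # Path a: race -> mediator
    a = smf.ols(f"{mediator} ~ race + {COVARS}", data=df).fit().params["race"]
    # Path b and direct effect c': outcome on race, mediator, and covariates
    full = smf.ols(f"{outcome} ~ race + {mediator} + {COVARS}", data=df).fit()
    b, c_prime = full.params[mediator], full.params["race"]
    # Total effect c: outcome on race without the mediator
    c = smf.ols(f"{outcome} ~ race + {COVARS}", data=df).fit().params["race"]
    # Partial mediation: indirect effect a*b is nonzero while c' also remains nonzero
    return {"indirect_ab": a * b, "direct_c_prime": c_prime, "total_c": c}
```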
We did not define our mediators and outcomes as latent factors for several reasons. First, income is only one of the underlying mechanisms by which education improves health and behaviors. Due to labor market discrimination, differential correlations exist between educational attainment and income across racial groups. Overall, educational attainment has a stronger correlation with income in Whites, as their education is more strongly rewarded by society with high-paying jobs [67][68][69]. As our findings showed, education and income functioned differently as partial mediators of the effect of race on our outcomes. Similarly, unique patterns of determinants were found for each of our outcomes, supporting our decision not to conceptualize SES and our outcomes as latent factors. Despite not having latent factors, our decision to use SEM for data analysis was based on the following advantages of SEM over regression models: (1) SEM uses data more efficiently in the presence of missing data; (2) SEM enabled us to decompose the effects of race on education and income as well as the direct effects on our outcomes; and (3) the error variances of education and income were allowed to correlate, a feature not available in regression analysis.
---
Results
---
Descriptive Statistics
Table 1 summarizes the descriptive characteristics of the participants. Participants had an average age of 49 years (SE = 0.34). Just over half (52%) of the participants were female. Of all participants, 87% were non-Hispanic White and 13% were non-Hispanic Black. About 92% of the participants had insurance.
---
Bivariate Correlations
Race was correlated with age, educational attainment, and income. Educational attainment was positively correlated with income and negatively correlated with fatalistic cancer beliefs. Income was also negatively correlated with fatalistic cancer beliefs. Cancer worries and perceived risk of cancer were positively correlated with each other; however, neither was correlated with fatalistic cancer beliefs (Table 2).
---
Fatalistic Cancer Beliefs
Model 1 was performed for cancer beliefs, which showed an acceptable fit (χ2 = 97.276, p < 0.001, CFI = 0.923, RMSEA = 0.06). According to this model, race (b = 1.68; p < 0.001), educational attainment (b = -0.65; p < 0.001), and income (b = -0.33; p < 0.001) were all associated with cancer beliefs. Black, low-educated, and low-income individuals had worse cancer beliefs. This model showed that SES indicators only partially mediate the effect of race on poor cancer beliefs. Race was directly associated with poor cancer beliefs, on top of its indirect effects through low educational attainment and low income (Table 3, Figure 1A).
---
Perceived Risk of Cancer
Model 2 was performed with perceived risk of cancer as the outcome. This model showed an acceptable fit (χ2 = 95.541, p < 0.001, CFI = 0.914, RMSEA = 0.06). According to this model, race (b = -0.55; p < 0.001) and income (b = 0.07; p = 0.005), but not educational attainment (b = 0.02; p = 0.714), were associated with perceived risk of cancer, with non-Hispanic Blacks and those with low income reporting lower perceived risk of cancer. This model showed that low income only partially mediates the effect of race on perceived risk of cancer. That is, race was directly associated with perceived risk, in addition to showing an indirect effect through income levels (Table 4, Figure 1B).
---
Cancer Worries
Model 3 was performed with cancer worries as the outcome. This model showed an acceptable fit as well (χ2 = 94.999, p < 0.001, CFI = 0.917, RMSEA = 0.06). According to this model, race (b = -0.36; p < 0.001), but not educational attainment (b = -0.05; p = 0.126) or income level (b = 0.02; p = 0.232), was associated with cancer worries. According to this model, non-Hispanic Black individuals had lower cancer worries, net of their SES. Based on this model, SES indicators did not mediate the effect of race on cancer worries. We found that race is directly associated with cancer worries, independent of educational attainment or income level (Table 5, Figure 1C).
---
Discussion
In a nationally representative sample of Non-Hispanic White and Black American adults, this study found that SES does not fully explain the racial differences in fatalistic cancer beliefs, perceived risk of cancer, and cancer worries. That is, race has direct effects on cancer related cognitions, emotions, and perceptions, that go beyond its effect on SES. As a result, elimination of SES gaps would not be enough for elimination of racial gap in cancer outcomes.
Low SES individuals and Blacks are at an increased risk of cancer compared to high SES and White individuals [20,28]. Despite their higher risk, they have less accurate cancer beliefs, lower perceived risk of cancer, and less cancer worry [56,57,[70][71][72][73]. This pattern suggests that Blacks may discount their risk of cancer, possibly to minimize their cognitive dissonance, particularly because cancer results in high levels of fear in them [74][75][76][77][78]. These psychological processes may contribute to low uptake of cancer screening, possibly due to avoiding cancer anxiety and worries [74][75][76][77][78][79][80]. Blacks experience other types of adversities. For instance, while age increased Whites' chance of having a conversation about lung cancer with their doctors, Blacks' chance of discussing lung cancer with their doctor did not increase with ageing, which may increase the risk of undiagnosed cancer in high-risk Black individuals [58]. In another study, perceived risk of cancer was associated with higher cancer screening for Whites but not Blacks [21]. It has been shown that elimination of racial disparities in cancer screening may contribute to the elimination of disparities in cancer outcomes, particularly mortality [23]. This combination puts the health and well-being of Black and low SES individuals in jeopardy. At the same time, it imposes enormous costs on the US health care system, directly and indirectly. This is not only paradoxical but troubling. Being at high risk of cancer, combined with fatalistic cancer beliefs, low perceived risk of cancer, low cancer worries, poor cancer knowledge, and low self-efficacy regarding cancer prevention, is a real public health and policy challenge [37][38][39][40]61,81,82]. This challenging reality invites policy makers, public health practitioners, and clinicians to invest heavily in enhancing the cancer beliefs, cognitions, and emotions of low SES and Black individuals, the groups that most need these interventions but at the same time lack them. That means that, instead of universal programs, we need interventions that disproportionately target low SES and Black individuals.
If SES could fully explain (i.e., mediate) the effects of race on health, then reducing socioeconomic disparities between racial groups would be easier, as equalizing racial groups' access to SES resources would fully eliminate the racial inequalities in health [11]. But the reality is that such efforts, while effective, are not enough [11,41,42]. We are not arguing that such efforts are not needed, or that they are not effective in reducing the racial gaps. Instead, our argument is that these differences would not be eliminated if the only focus is SES. Still, despite equal SES, racial groups will show differential outcomes [41,42]. This is mainly because SES better serves Whites than non-Whites, particularly Blacks, and high SES Blacks still have high health needs [43][44][45][83][84][85]. This disadvantage of Blacks, also known as "Minorities Diminished Returns", suggests that we tend to over-estimate the effects of enhancing SES on racial disparities [43][44][45]84,85]. The ultimate solution to racial disparities includes policies that focus on racism and the structural aspects of society, rather than merely addressing racial gaps in access to SES resources [86][87][88][89][90][91].
Racism and discrimination are possible causes why racial minority groups have worse cancer beliefs, cognitions, and emotions, above and beyond SES. Another explanation for this phenomenon may be health literacy, and cancer literacy, in particular [33,92]. Finally, some of the racial and ethnic differences in cancer beliefs, cognitions, and emotions may be due to culture [93][94][95]. Additional research is needed to decompose the role of structural and social factors, culture, and knowledge (e.g., health literacy) in racial differences that are beyond SES differences. Stigma, mistrust, and fear should not be left behind when we address race and SES disparities in cancer emotions and cognitions [96].
Elimination of SES differences across racial groups is not enough for elimination of racial gap in health, and cancer is not an exception to this rule. The effect of race outside of SES is mainly due to racism and discrimination. Society unequally treats racial groups, based on their skin colors, and any non-White group is perceived as inferior, and is discriminated against. Discrimination is a known risk factor for poor health [97,98]. Barriers beyond SES should not be ignored as a major cause of racial disparities in cancer outcomes [99]. Mass media campaigns enhance cancer control via cancer education that target marginalized groups. Such efforts should simultaneously target racial minorities and low SES people, instead of merely focusing on either SES or race. Addressing one and ignoring the other may not be the optimal solution to the existing problems.
Cancer related cognitions, emotions, and perceptions have major implications for prevention and screening. Seeking services, as well as pro-health behaviors collectively reduce prevalence and burden of cancer [93]. Such cognitions, emotions, and perceptions are among the reasons Blacks and low SES individuals have higher cancer risk, are at risk of late diagnosis, receive late diagnoses, have lower adherence to cancer screening and treatment, and die more often from cancer [93]. According to this study, race and SES jointly cause disadvantage in cancer outcomes through their effects on cancer cognitions, emotions, and perceptions. All these processes in turn contribute to the disproportionately high risk of cancer as well as high burden of cancer in low SES and Black individuals [100].
Poor access to the health care system may partially explain the poor cancer outcomes of marginalized groups, including low SES and Black individuals [95]. This study included two indicators of access to health care. Although we did not directly measure stigma, our SES constructs correlate with stigma. Thus, our study may have indirectly captured the confounding role of access and stigma. This argument is based on the fact that individuals who regularly use health care have lower stigma and higher trust toward the health care system and health care providers [101]. Low SES individuals and Blacks have higher stigma and lower trust in the health care system [102], which is one of the reasons they have worse cancer beliefs, cognitions, and emotions, as well as a higher cancer burden [103].
---
Study Limitations and Future Research
The current study had some limitations. First, the sample size was disproportionately lower for Blacks, which may have implications for statistical power. To address this issue, we ran all of our models within the pooled sample, rather than running models across racial groups. Second, the study was cross-sectional in design; we can infer association but not causation. Third, this study only included individual-level factors. Fourth, this study missed some potential confounders, such as personal history of cancer. Fifth, some of the study constructs were measured using only one or a few items. There is a need for studies using more sophisticated and comprehensive measures with higher reliability and validity. There is also a need to examine whether these patterns differ across age groups and cohorts. Finally, there is a need to replicate these findings for each type of cancer and for other racial and ethnic groups. Despite these methodological and conceptual limitations, this study still makes a unique contribution to the existing literature on the additive effects of race and SES on cancer beliefs, cognitions, and emotions.
The current study was limited in how it measured the dependent variables namely cancer beliefs, cancer perceived risk, and cancer worries. Cancer beliefs were measured using the following items: (1) "There's not much you can do to lower your chances of getting cancer", (2) "It seems like everything causes cancer", (3) "There are so many different recommendations about preventing cancer, it's hard to know which ones to follow", and (4) "When I think about cancer, I automatically think about death". While all of these four items also reflect "fatalistic cancer beliefs", some of these items at the same time also reflect confusion about cancer information or low perceived self-efficacy in preventing cancer ("There's not much you can do to lower your chances of getting cancer", "There are so many different recommendations about preventing cancer, it's hard to know which ones to follow". The wording of some of the items may also be problematic. For example, we do not know whether the item # 2 is taken literally or not. Particularly because of the term "seems", this item may simply suggest that there is a barrage of information out there that is hard to interpret. Item # 3 reflects cancer misbelief but may also reflect poor self-efficacy in determining the validity of cancer information. Due to the surfeit of information from various sources that are available, it can be hard for many individuals to assess the validity of the information. These items may be confounded by a sense of frustration about own ability to determine the validity of certain claims, some of which are well known for having been reversed, even by top medical facilities. The item # 4 reflects cancer beliefs but may also be an indication of the fear associated with cancer. It may or may not literally mean that all cancer diagnoses are lethal.
---
Implications
The results reported here have major implications for research, practice, and policy making. The results advocate for looking beyond SES as a root cause of cancer disparities across racial groups in the US. Although SES is one of the major contributors of racial disparities in cancer, it is not the sole factor. Racial disparities in cancer are the results of race and SES rather than race or SES. Therefore, US policies should address social and structural processes and phenomena such as racism as well as poverty and low educational attainment. Elimination of racial disparities in cancer is not simply achievable via one line of interventions that focus on SES. Instead, multi-level solutions are needed that address race as well as SES. Policies that only focus on economic and social resources are over-simplistic and will not eliminate the sustained and pervasive disparities by race and SES [41,42].
---
Conclusions
To conclude, only some of the racial disparities in cancer beliefs, cognitions, and emotions are due to racial differences in SES. Policy makers, practitioners, public health experts, and researchers should consider race as well as SES as factors that jointly cause disparities in cancer outcomes. Racism, discrimination, culture, access to the health care system, and other individual and contextual factors may have a role in shaping racial disparities in cancer outcomes.
---
Author Contributions: S.A.: conceptual design, analysis, manuscript draft. P.K. and H.C.: interpretation of the findings, revision.
---
Conflicts of Interest:
The authors declare no conflict of interest. | 26,464 | 1,769 |
3a467042a28b331c5cd0ba996ef96b27f243348d | Climate change communication in India: A study on climate change imageries on Instagram | 2023 | [
"JournalArticle"
] | The rising accessibility of mobile phones and the proliferation of social media have revolutionized the way climate change has been communicated. Yet, the inherent invisibility and temporal complexities of climate change pose challenges when trying to communicate it on visual media platforms. This study employs visual content analysis to investigate how environmental nongovernment organizations (NGOs) in India address these limitations on their Instagram pages. Four environmental NGOs based in India were selected, and their thirty most recent Instagram posts related to climate change were analyzed based on imagery type, subject, context and themes. The findings revealed that these NGOs employed a diverse range of climate change imageries, often accompanied by overlaying texts, to traverse the lack of standardized visual tropes. Moreover, it is noted that a significant majority of analyzed Instagram imageries following the visual principles advocated by Climate Outreach emerged from one single NGO account, suggesting potential variations in the visual communication strategies among different NGOs. | Introduction
Climate change is a pressing problem affecting all countries across the globe. Being one of the most vulnerable countries and one of the largest greenhouse gas emitters, India faces a complex policy challenge in addressing climate change (Thaker, 2017). The impacts of climate change have become more obvious in recent years in the form of flash floods, cyclones, droughts and landslides, and are predicted to be even worse in the coming years. In times of such climate emergency, it becomes crucial to look into how actors (scientists, activists, journalists and environmental NGOs) communicate this issue.
Research over the years has positioned the media as the focal point of climate change communication, as the public's understanding of and engagement with the issue are mostly based on how the media represent it (Carvalho, 2007; Junsheng et al., 2019; Wolf & Moser, 2011, p. 2). The transition from traditional media to social media has opened up new ways of communicating with and engaging the general public on a range of topics. Yet, making climate change meaningful to the masses has proven challenging (DiFrancesco & Young, 2011). Despite all the communication efforts from various actors over the years, it still remains an abstract issue, far removed from the day-to-day lives of most people (S. J. O'Neill & Hulme, 2009). Researchers attribute this to the lack of visibility of the causes and to stakeholders' indirect experience of the impacts of climate change (Doyle, 2007; O'Neill & Smith, 2014; Wang et al., 2018).
It is well known that visuals and images strengthen the public's understanding of complex issues, but when it comes to climate change, this is deeply contested. The time lag between cause and effect has made the visual depiction of climate change problematic (Doyle, 2011). Leiserowitz (2006) argued that the lack of "vivid, concrete and personally relevant affective images" makes people perceive it as a disconnected and faraway issue. Until recently, the visual language of climate change was mostly dominated by graphs and scientific figures (O'Neill & Smith, 2014). While the cumulative character of climate change poses problems for its visual representation, a considerable array of potential imageries associated with climate change is extensively used across online platforms today (Wang et al., 2018).
Environmental NGOs play a critical role in bridging the communication gap between the scientific community, government officials and the local public on climate change issues (Jeffrey, 2001). Earlier studies on climate change communication by non-governmental organizations (Doyle, 2007) found that the popular iconographies of climate change found today are produced through the cumulative impact of campaigning choices of NGOs. The popularity of digital media has prompted environmental NGOs to employ more visuals to engage the public in social networking sites as visuals are considered central to digital media consumption.
There have been many studies on the visual representation of climate change across various media platforms (Culloty et al., 2018; Lehman et al., 2019; O'Neill & Smith, 2014; Wang et al., 2018). The theoretical perspectives on visual climate change communication are, so far, limited. The most widely used framework for climate change communication is frame theory, proposed by Entman (1993), but it is mostly applied in analyses of climate change news in print media. Framing assumes that media coverage and representation influence how people perceive an issue (Culloty et al., 2018). The present study examines how NGOs represent climate change visually on their social media (Instagram) pages. To understand the visual framing, the study used the seven principles of visual climate communication proposed by Climate Outreach in their 2015 report, on which the research questions are based. The seven principles cover the portrayal of 'real' people; new climate narratives; the causes of climate change at scale; emotionally powerful climate impacts; climate impacts in local contexts; problematic visuals of protests; and the audience (Corner et al., 2015).
---
Methodology
The present study employed visual content analysis to explore the visual representation of climate change on the social media pages of environmental NGOs in India (Metag, 2016). Through this analysis, the study aimed to investigate how the visual limitations of climate change have been negotiated by NGOs to communicate the issue on an image-centric platform such as Instagram, and to examine how closely the content aligns with the visual principles for effective climate change communication proposed in the 2015 Climate Outreach report.
To develop the sample frame, the 'Site' search function was used with the key terms "climate change", "NGO" and "India" (Site: Instagram.com "climate change" "India" "NGO") across two popular search engines (Google and Yahoo). Out of the 23 Instagram accounts that emerged in the initial search results, the researcher purposively selected four NGO Instagram accounts, namely Green Yatra, Greenpeace India, Climate Change India and Climate Front India, that fulfilled the following criteria: popularity (more than 500 followers); activity level (a minimum of 100 posts); and #climatechange-tagged content. The thirty most recent Instagram posts as of 20 October 2022 that carried any of the following hashtags: #climatechange, #climatecrisis, #globalwarming or #climateaction were selected from each NGO account. Repetitive posts and posts containing promotions or advertisements related to the organization were excluded from the selection process. Thus, a total of 120 posts were retained for coding.
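As a rough illustration of these inclusion criteria (not code used in the study, and with hypothetical field names), the filter below keeps only accounts meeting the follower, activity and hashtag thresholds described above.

```python
# Hedged sketch of the account inclusion criteria described above.
# The 'account' dictionary fields (followers, post_count, hashtags_used) are assumptions.
CLIMATE_TAGS = {"#climatechange", "#climatecrisis", "#globalwarming", "#climateaction"}

def eligible(account: dict) -> bool:
    return (account["followers"] >= 500                              # popularity
            and account["post_count"] >= 100                          # activity level
            and bool(CLIMATE_TAGS & set(account["hashtags_used"])))   # tagged content

# Example use on a list of candidate accounts from the search step:
# selected = [a["name"] for a in candidate_accounts if eligible(a)]
```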
---
Coding procedures
Coding was mostly based on existing codes that have emerged in the literature on climate change visuals (DiFrancesco & Young, 2011; Doyle, 2011; Lehman et al., 2019; León et al., 2022; O'Neill & Smith, 2014) and in other Instagram studies (Cohen et al., 2019).
This study presents the categorization codes and sub-codes utilized for coding visual posts in Table 1. Only the first image of the post series was coded. The posts were analyzed along with the captions and were grouped into four categories-type of imagery used, the subject of the image, its geographic context and its thematic focus (DiFrancesco & Young, 2011). An imagery type is the type of visual component used for the post and is further categorized into four main codes-visual image (photographs/ illustrations/ artwork); text only(Quotes/ data driven/ news/ narrative story); text combined with image and video (Cohen et al., 2019).
Image subject was coded into human subjects (human/illustrated figure) and non-human subjects. The human subjects were further categorized under the codes identifiable/unidentifiable, victims/have agency, or locals/activists (Doyle, 2007; S. O'Neill, 2020; O'Neill & Smith, 2014). If the identity of the portrayed human subject was not well known or mentioned anywhere (in the post or captions), it was coded as 'unidentifiable'.
The non-human subjects were coded into nature (greenery/urban or industries/disaster or pollution); animals (wild/domestic); and others. Image subject codes were not mutually exclusive. When an image contained more than one visual element, only the most meaningful information was coded. Image context is the setting shown in the imagery and was coded as local, national or general. Post themes were coded as causes, impacts, solutions and others (DiFrancesco & Young, 2011).
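A compact way to see the coding scheme described above is as a small codebook data structure. The sketch below (with assumed label strings) records one code per dimension for each post and tallies the results, mirroring the kind of percentages reported in the findings.

```python
# Hedged sketch of the codebook described above; the label strings are assumptions.
from collections import Counter
from dataclasses import dataclass

CODEBOOK = {
    "imagery_type": ["visual image", "text only", "text + image", "video"],
    "subject": ["human", "nature", "animal", "other"],
    "context": ["local", "national", "general"],
    "theme": ["causes", "impacts", "solutions", "other"],
}

@dataclass
class CodedPost:
    imagery_type: str
    subject: str
    context: str
    theme: str

def tally(posts: list[CodedPost], dimension: str) -> Counter:
    """Count how often each code of one dimension appears across coded posts."""
    assert dimension in CODEBOOK, f"unknown dimension: {dimension}"
    return Counter(getattr(p, dimension) for p in posts)
```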
---
Potential limitations and ethical consideration
While Instagram remains as a popular social media platform for NGOs to connect with public on issues such as climate change, relying solely on it may result in an incomplete understanding of the broader NGO landscape and their communication efforts across other important social media platforms such as Facebook and Twitter. Additionally, the selection of NGOs based on popularity and activity level introduces the risk of bias and potentially overlook important contributions from less known organizations. This limitation could impact the generalizability of the findings and may not capture the whole picture of NGOs communication pattern on the issue.
Ethical considerations were addressed by analyzing only publicly accessible content on Instagram. However, it is important to note that the study used content shared by NGOs without obtaining explicit informed consent from individual users. Although efforts were made to adhere to ethical guidelines, it is still possible that individual privacy could be compromised. Future research could address these ethical issues and explore ways to obtain informed consent when studying social media content.
---
Results and Discussion
---
Types of imagery
Figure 1 illustrates the types of imagery used in the study. Of the 120 posts analyzed, approximately 60% featured 'text combined with images', 20% consisted of visuals only, 11% were text only, and 9% were in video format. The prominence of each imagery type varied across NGOs; for example, Greenpeace India used more video content than the rest. However, 'text combined with image' remained the most used imagery type across all four NGO accounts. Photographs were predominant, followed by illustrations and posters. Of the overlaid text content, 38% was rated as educational, 25% as opinions or quotes, 16% as motivational, 13% as warnings, 3% as humorous and 5% as other.
The images accompanying the texts mostly comprised of photographs (69%) and illustrations (31%). Similarly, photographs dominated the visual only content posts (84%). Eleven percent of the posts were text only, containing mostly quotes and data driven information. Videos, in general, were rarely used. Figures 2(a) and 2(b) provide illustrations of the various types of imagery used to depict climate change in India.
---
Image subject
The imagery used by the NGOs covered both human and non-human subjects; however, human subjects dominated, appearing in almost half of the visuals, as shown in Figure 3. 32% of the posts focused on nature, 11% covered animals and the remaining 7% focused on other elements (e.g., food). Of all the human figures (85.1% real people and 14.8% illustrated figures), locals dominated the sample, followed by activist groups. However, this pattern changed when individual accounts were analyzed separately (e.g., in the Climate Front India account, activists were more predominant). Most human subjects shown were unidentifiable, without any description in the posts or captions, except a few in Greenpeace India's posts. The presence of celebrities and officials was insignificant in number.
The presence of males and females was almost equal, whereas other genders were not represented. Most of the posts depicted young and middle-aged humans, followed by children. None of the imagery featured human figures with a visible physical disability. Most of the posts (70.3%) showed humans as having agency, while 22.2% portrayed them as victims and 7.4% as perpetrators. Of the imagery that contained nature, 50% featured urban environments (industries/polluted), 29.4% greenery and 20.5% disaster or pollution. Figures 4(a) and 4(b) illustrate the image subjects related to humans. In the Greenpeace India post, the image depicts an 'ordinary man' riding a bicycle, while in the Green Yatra post, humans are portrayed as 'victims'.
---
Image context
Figure 6 presents the research findings pertaining to the context of the images used in the study. Of the 120 posts, 50 (41.6%) carried general content, 32% featured content related to India, and 25% carried localized content specifying villages, cities and states in India. A large portion of the location-based content (64%) was produced solely by Greenpeace India, whereas Climate Change India produced more general, non-context-sensitive content (46%). Examples of image context are illustrated in Figure 7, showcasing general content related to climate change in India.
---
Post themes
The research findings related to post themes can be observed in Figure 8, while Figures 9(a), 9(b), 9(c) and 9(d) present several examples of post themes used in climate change campaigns in India. The number of posts focusing on solutions (47.5%) was higher than those focusing on causes (24%) and impacts (17.5%). Around 11% of the posts were posters and quotes that did not fit any of these frames.
The solutions covered diverse topics including sustainable lifestyles, forest and water conservation, wildlife protection, and reviving traditional food culture. Around 35% of the solution posts showed climate activism. The cause frame mostly covered visuals concerning pollution, food wastage, deforestation, and coal usage. Impacts were mostly illustrated through visuals of natural disasters (flood/drought), water scarcity, animal suffering, and heat waves.
---
Discussion
The present study looked into the Instagram account of four environmental NGOs working on climate change issues in the Indian context namely, Green Yatra, Greenpeace India, Climate Change India and Climate Front India. The findings were analyzed to understand how the type of imageries used by the four NGOs traverse the visual complexities of climate change. The usage of climate change imageries by these NGOs was discussed on the basis of seven principles proposed by Climate Outreach in their 2015 report. The seven principles included the portrayal of 'real' people; new climate narratives; the causes of climate change at scale; emotionally powerful climate impacts; climate impacts at local context; and problematic visuals of protests and audience (Corner et al., 2015). These 'climate change visuals principles', grounded in a substantial body of work in visual communication and climate change communication disciplines, are a helpful heuristic for analyzing the main findings of the present study (Wang et al., 2018).
The abstract nature of climate change, owing to the lack of visual evidence, creates difficulties in communicating it through visuals (Doyle, 2011). Environmental NGOs employ a wide array of imagery, such as visuals only, text combined with visuals, text only and video, to represent climate change issues on Instagram. Although the study included only NGOs working in India, there were considerable differences in how each addressed climate change. Most of the visuals in their Instagram posts are accompanied by text, reinforcing the limits of visuals alone in representing climate change. A standard approach for visualizing climate change is to use universally recognizable icons such as polar bears, glaciers and smokestacks (Schroth et al., 2014). However, the findings showed limited use of such "cliched" iconography, with only a few NGO posts featuring polar bears or smokestacks. This may be the result of the decade-long arguments in the climate communication literature (Doyle, 2011; Manzo, 2010; O'Neill & Smith, 2014) around the problematic use of symbolic and iconic photographs in climate change communication. On the other hand, while such images are criticized as "psychologically distant", publics find them the most 'easy to understand' images of climate change (Lehman et al., 2019). Images of floods, cracked ground, forest fires and animal death were the other impact visuals used in the NGOs' communication in India. Such images capture people's attention and create a sense of the importance of climate change (S. J. O'Neill et al., 2013). Flood images have been ranked most important in many studies (Lehman et al., 2019). However, communicators still struggle to understand how such images could empower people to act on climate change. Research (Corner et al., 2015) has identified seven principles upon which evidence-based climate change communication can be carried out effectively.
The presence of human figures is important in climate change imagery. Showing 'real' humans in climate change visuals can be effective in evoking emotions (Corner et al., 2015). Previous literature showed that most climate change visuals portray humans as separated and disconnected from the environment (Doyle, 2011). According to Ockwell et al. (2009), people fail to internalize climate change visuals because of the lack of a human element in them. The findings of the present study revealed that almost half of the climate change imagery in the NGO posts had at least one human figure in it, although the ratio varied when individual accounts were considered separately. Illustrations were also used considerably to portray humans. However, research shows that increasing public engagement is possible only when real people doing real things are represented (O'Neill & Smith, 2014). Such images are considered 'authentic' and can evoke emotions in the public (León et al., 2022). Most humans portrayed by the NGOs on their Instagram pages are ordinary and non-identifiable people. This is in line with previous studies, which argued that identifiable people are shown less on social media platforms compared to traditional media (León et al., 2022). The findings also noted that certain communities, such as people with visible physical disabilities, were not given proper coverage in the NGO Instagram posts.
New narratives of climate change are necessary to draw more attention. Although the 'classic' images of smokestacks, polar bears or deforestation are useful in communication, audiences find them cynical most of the time (Corner et al., 2015). Images that tell real-life stories are an effective attempt to remake the visual representations of climate change in the public mind (Corner et al., 2015). There have been considerable attempts by the NGOs in India to include people's narratives in their climate change posts. This is most evident with Greenpeace India, which overlaid quotations from affected parties on their visuals, with the full story given in the captions that followed. Such communication attempts have been shown to be more effective than historical narratives. Then again, such images are criticized for only evoking feelings but not actions (S. O'Neill, 2020). On the other hand, personal stories of successful adaptation or mitigation activities were found effective in fostering engagement among 'resistant audiences' (León et al., 2022). Humor is another way to give a diverse interpretation to climate change; however, only a limited number of the NGO posts under study had humorous content.
For a long time, visuals of smoking chimneys dominated the cause frame of climate change (Wang et al., 2018). But this has changed as NGO campaigners' focus has shifted to changing individual behaviors. Research has shown that the general public will not connect behaviors such as driving a car or scooter, eating meat or wasting food with climate change. The causes of climate change therefore need to be shown at a large scale (Corner et al., 2015). The majority of the cause-related posts in the study depicted either congested traffic, landfills or smoking chimneys.
Research over the years has repeatedly demonstrated the power of climate impact visuals in making climate change relevant (Lehman et al., 2019). Climate change impact visuals started becoming more prominent in the 1990s with images of melting ice, floods and drought (Wang et al., 2018). Research has shown that fear-inducing and negative impact photographs, though they create a sense of urgency about the issue, can be overwhelming (Nicholson-Cole, 2005; Ojala et al., 2021).
Impact frames were found less often in the Instagram content of the NGOs. Their focus was more on climate solutions such as sustainable lifestyles, clean energy and reviving traditional food culture. Research indicates that such solution images are more effective when coupled with emotionally powerful impact visuals (Corner et al., 2015). However, no such visual framing was found in the samples. The majority of the impact visuals covered animal suffering and were not exclusively in the Indian context. People are more likely to act when they find the issue connected with their local context and immediate surroundings (Hulme, 2015). However, emphasizing local impacts, though effective, may reduce people's concern about wider issues if the broader intensity of the situation is not also shown (Hulme, 2015).
Activists and protesters are the other key subjects found in climate change communication. It has become a common sight to see activists become the face of the issue they represent (e.g., Greta Thunberg). However, research has shown that such images attract widespread pessimism and do not engage the public beyond those who are already involved (O'Neill & Smith, 2014). Protesters and activists occupied the majority of the content in Climate Front India and Climate Change India. Though they are crucial in representing marginalized sections in climate change communication and act as a watchdog for government projects (Syahrir, 2021), such images tend to reinforce the idea that climate change is for 'them', not 'us' (Corner et al., 2015).
Overall, the content on the Instagram accounts of the selected NGOs showed variations in framing and communicating climate change visually. Greenpeace India shares content mostly in line with the visual principles proposed by Climate Outreach, emphasizing local yet relevant social and environmental issues and using photographs of the local public. Most of their posts contain the voices of local people as quotes accompanying the visuals. Green Yatra uses illustrations and data to visually represent the issue; though photographs are used, they are mostly stock photos with accompanying information- and data-rich texts. Climate Change India used visuals that demand an urgent call to action; their visuals mostly cover animals and frame humans as perpetrators. Climate Front India covers climate activists and protesters in its content, mostly through photographs of protesters holding placards. Thus, the study reveals diverse visual framing of climate change across NGO communication. This opens up the need for a more in-depth understanding of climate change visuals used across various social media platforms by various actors. Since the present study only explores the imagery used by NGOs for communicating climate change, future studies could look into its impacts and effectiveness on users, which would be beneficial for planning more audience-centric communication strategies.
---
Conclusion
The historical favoring of visuals within environmental discourse poses difficulties for environmental organizations (NGOs) in communicating temporally complex environmental issues such as climate change to skeptical governments and a disinterested public (Doyle, 2007). But the proliferation of increasingly image-centric digital platforms indicates that climate change imagery will be essential for fostering public engagement both now and in the future (Wang et al., 2018). People understand and perceive issues based on how the media represent them, and today that increasingly means digital media. The content analysis of climate change related Instagram posts of four NGOs working in India (Greenpeace India, Green Yatra, Climate Change India and Climate Front India) found a diverse use of imagery on the topic despite its problematic visual shortcomings.
The lack of central visual tropes was negotiated with a diverse choice of imagery accompanied by text in the Instagram posts. Around half of the imagery in the sample featured humans; however, the majority were staged photographs, contrary to the suggestions outlined by Climate Outreach in their report. The classic narratives of climate change, such as polar bears and melting glaciers, were rarely found in the samples. On the other hand, local narratives and stories were more evident, especially in Greenpeace India's posts. Much of the NGOs' communication effort was directed towards changing individual behavior by focusing more on climate change solutions. The causes and impacts of climate change were given limited focus by the NGOs. Despite the fact that the NGOs selected for the study were based in India, they showcased great diversity in addressing the issue. Much of the content carried generalized themes with little reference to Indian and local contexts. Locals and ordinary people were given more emphasis, unlike in traditional media, which tended to focus on celebrities and politicians. Protesters and activists were seen as the key players in some posts, especially those of Climate Front India. Though they were crucial in representing marginalized sections in climate change communication and acted as a watchdog for government projects (Syahrir, 2021), such images tended to reinforce the idea that climate change is for 'them', not 'us' (Corner et al., 2015).
However, much of the visual content aligning with the seven principles of climate change communication came from the Greenpeace India account. This suggests potential variation in communication patterns among NGOs working on climate change and opens up the need to look into the communication strategies of the various actors involved in climate change communication.
93dc11fb59a8b99cbc9862d5ae43c1b5468dbf12 | The association of household food security, household characteristics and school environment with obesity status among off-reserve First Nations and Métis children and youth in Canada: results from the 2012 Aboriginal Peoples Survey. | 2017 | [
"JournalArticle",
"Review"
] | Introduction: Indigenous children are twice as likely to be classified as obese and three times as likely to experience household food insecurity when compared with non-Indigenous Canadian children. The purpose of this study was to explore the relationship between food insecurity and weight status among Métis and off-reserve First Nations children and youth across Canada.We obtained data on children and youth aged 6 to 17 years (n = 6900) from the 2012 Aboriginal Peoples Survey. We tested bivariate relationships using Pearson chisquare tests and used nested binary logistic regressions to examine the food insecurity-weight status relationship, after controlling for geography, household and school characteristics and cultural factors. Results: Approximately 22% of Métis and First Nations children and youth were overweight, and 15% were classified as obese. Over 80% of the sample was reported as food secure, 9% experienced low food security and 7% were severely food insecure. Off-reserve Indigenous children and youth from households with very low food security were at higher risk of overweight or obese status; however, this excess risk was not independent of household socioeconomic status, and was reduced by controlling for household income, adjusted for household size. Negative school environment was also a significant predictor of obesity risk, independent of demographic, household and geographic factors.Both food insecurity and obesity were prevalent among the Indigenous groups studied, and our results suggest that a large proportion of children and youth who are food insecure are also overweight or obese. This study reinforces the importance of including social determinants of health, such as income, school environment and geography, in programs or policies targeting child obesity. | Introduction
Indigenous children in Canada (including First Nations, Métis and Inuit) are at a disproportionately higher risk for overweight and obesity compared to their non-Aboriginal Canadian counterparts. 1,2 Defined as the accumulation of excess body fat, obesity is associated with poor health outcomes including compromised immune function, mental health disorders, type 2 diabetes, cardiovascular disease, sleep apnea and decreased quality of life. [3][4][5][6][7] According to the 2009-2011 Canadian Health Measures Survey, approximately one-third of Canadian children and youth between 5 and 17 years of age are classified as overweight (body mass index [BMI] ≥ 25kg/m 2 -< 30kg/m 2 ) or obese (BMI ≥ 30kg/m 2 ), with Indigenous children and youth being twice as likely to be classified as obese in comparison. 4 Corroborating this pattern, the Public Health Agency of Canada reports that 20% of First Nations children living outside of First Nations reserves and 16.9% of Métis children have a BMI ≥ 30, compared to 11.7% of non-Indigenous Canadian children. 2,4 While the etiology of obesity is multifactorial and complex, a social determinants of health framework provides a starting point for unpacking the distal * causes of child obesity, as well as identifying targets for prevention and treatment. 8,9 However, the health disparities experienced by Indigenous peoples highlight the fact that these social determinants are experienced differently by Indigenous populations and must be explored alongside more culturally relevant factors. Several Indigenousspecific social determinants of health models have been developed as a result, including an ecological model by Willows et al. 8 that includes causal factors related to households, schools, communities and the macrosocial context. Greenwood and de Leeuw 9 use a web diagram to demonstrate that there are multiple interrelated relevant social determinants of Aboriginal peoples' health operating at various socioecological levels.
One factor noted in these models that has been gaining increased attention in obesity research is the importance of food security for weight status. Food insecurity is defined as a situation in which availability or access to nutritionally adequate and culturally acceptable food is limited or uncertain. 10,11 While the relationship between food insecurity and obesity may seem paradoxical, research is increasingly linking the two, as food insecurity results in a lack of affordable nutritious food choices, which then may result in obesity. [12][13][14][15][16] Adults and children have distinct experiences of food insecurity, as children are more vulnerable to resultant behavioural problems, such as decreased school attendance and performance, and poorer overall health and nutrition, despite parents' efforts to minimize food insecurity's impact. 13,17,18 A possible relationship between food insecurity and obesity may be especially relevant for Indigenous children, as Indigenous households are three times more likely to experience food insecurity than non-Indigenous Canadians. 19,20 The 2007/2008 Canadian Community Health Survey found that 20.9% of Indigenous households were food insecure, with 8.4% experiencing "severe" food insecurity. 20 In comparison, 7.2% of non-Indigenous households were food insecure and 2.5% experienced severe food insecurity. 20 Much of this discrepancy can be explained by the higher prevalence of sociodemographic risk factors in Indigenous households (e.g. household crowding, lower household income), 19 many of which have also been found to be related to obesity. 21 Previous qualitative research with off-reserve Métis and First Nations parents found that food insecurity was perceived by community members to be an important cause of obesity in their communities. 22 In those interviews, food insecurity was thought to be not only a result of low income, but also the high price of fresh food in some locations and a lack of transportation. For some, the loss of traditional food and knowledge about its preparation was also important, leading to poorer diets. 22 However, the association between food insecurity and obesity in Indigenous children has not been quantitatively examined. Moreover, it is important to consider this relationship in the context of other potentially important effects, including household characteristics, school-level factors, geography and cultural factors. In this paper, we make use of the 2012 Aboriginal Peoples Survey (APS) 23 to examine the association between household food security status and obesity among off-reserve First Nations and Métis children and youth in Canada, independent of other household, school, geographic and cultural factors.
---
Methods
---
Data and participants
The 2012 APS was a postcensal, national survey of the population aged 6 years and older identified in the 2011 National Household Survey, 24 and living outside of First Nations reserve communities as well as select Indigenous communities in the North. 21,23 This study focussed on First Nations and Métis children and youth aged 6 to 17 years. Inuit children and youth were excluded, as the geography-driven factors affecting their food security status, as well as their unique BMI profiles and body fat distribution, require independent investigation. 25,26 After excluding the Inuit population and adults aged 18 years and over, the final sample included 6900 individuals. Questions for children aged 6 to 14 years were answered by the "person most knowledgeable" (PMK) about the child, generally a parent or guardian. Youth aged 15 to 17 years were interviewed directly. Details about the sampling, data collection and weighting are available in the APS concepts and methods guide. 23
---
Main variables
---
Obesity status
The dependent variable was weight-status based on BMI categorization using Cole's BMI cut-offs. 27 BMI was calculated using PMK-reported height and weight of children. The APS asked, "How tall is [your child] without shoes on?" and "How much does [your child] weigh?" in order to calculate BMI. 28 Weight status categories included normal, overweight and obese.
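For illustration, a minimal Python sketch of this step is given below. The two cut-offs are hypothetical, simplified values standing in for one age-sex stratum; the Cole cut-offs actually used are age- and sex-specific reference tables.

```python
# Illustrative sketch (not the APS/Statistics Canada code): classify weight status
# from reported height and weight. Real Cole cut-offs are age- and sex-specific
# tables; the two thresholds below stand in for one hypothetical age-sex stratum.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index = weight (kg) / height (m) squared."""
    return weight_kg / (height_m ** 2)

def classify_weight_status(weight_kg, height_m, overweight_cutoff, obese_cutoff):
    """Return 'normal', 'overweight' or 'obese' for the given Cole-style cut-offs."""
    value = bmi(weight_kg, height_m)
    if value >= obese_cutoff:
        return "obese"
    if value >= overweight_cutoff:
        return "overweight"
    return "normal"

# Example: a 10-year-old with assumed cut-offs of 19.8 (overweight) and 24.0 (obese).
print(classify_weight_status(42.0, 1.40, overweight_cutoff=19.8, obese_cutoff=24.0))
```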
---
Food insecurity
The 2012 APS measured household food insecurity over the past 12 months using a series of six statements to which the PMK responded, "often true," "sometimes true" or "never true." The statements captured whether households were able to afford balanced meals, if meals had been downsized or skipped because there was not enough money for food, the frequency of these events, and how often household members experienced hunger. These responses were used by Statistics Canada to categorize households into four levels of food security: high, marginal, low and very low. 28 In the analyses, "highly secure" and "marginally secure" were combined into one category.
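The following hedged sketch illustrates one way such a derivation could be coded. The thresholds that map counts of affirmative responses to the four levels are assumptions made for illustration only; the actual Statistics Canada scoring algorithm is not reproduced here.

```python
# Hypothetical sketch of the household food-security derivation: the actual
# Statistics Canada algorithm differs in detail. Here each "often true" or
# "sometimes true" answer to the six items counts as one affirmative response,
# and assumed thresholds map the count to the four levels described in the text.

def food_security_level(responses):
    """responses: list of six answers, each 'often', 'sometimes' or 'never'."""
    affirmative = sum(r in ("often", "sometimes") for r in responses)
    if affirmative == 0:
        level = "high"
    elif affirmative == 1:
        level = "marginal"
    elif affirmative <= 4:
        level = "low"
    else:
        level = "very low"
    # As in the analysis, collapse 'high' and 'marginal' into one category.
    return "high/marginal" if level in ("high", "marginal") else level

print(food_security_level(["never", "sometimes", "often", "never", "never", "never"]))
```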
---
Covariates
In addition to household food insecurity, covariates included demographic, household, school, geographic and cultural variables previously identified as having potential relationships with food insecurity or obesity.
The demographic variables included were Indigenous identity group (First Nations or Métis), age (6-11 or 12-17 years) and gender (male, female). Household socioeconomic characteristics included annual household income and mother's educational attainment. Household income was divided by the number of household members to provide a "per capita" household measure, which was included as quartiles (less than $9510; $9510-$16 680; $16 690-$27 260; and $27 280 and above). Other household characteristics included family structure (two-parent, lone-parent or other), as well as household crowding, which was measured based on the number of people per room.
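A short sketch of the per-capita income derivation and quartile grouping is shown below, using the cut points reported above; the household data values are hypothetical.

```python
import pandas as pd

# Sketch of the per-capita income measure: household income divided by household
# size, then grouped using the quartile boundaries reported in the text.
households = pd.DataFrame({
    "household_income": [28000, 95000, 41000, 60000],
    "household_size": [4, 5, 2, 3],
})
households["income_per_capita"] = (
    households["household_income"] / households["household_size"]
)
# Quartile boundaries from the text (dollars per household member).
bins = [-float("inf"), 9510, 16680, 27260, float("inf")]
labels = ["Q1 (<$9 510)", "Q2 ($9 510-$16 680)",
          "Q3 ($16 690-$27 260)", "Q4 (≥$27 280)"]
households["income_quartile"] = pd.cut(
    households["income_per_capita"], bins=bins, labels=labels
)
print(households)
```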
The APS included questions about the school environment. Respondents were asked to indicate their level of agreement using a four-point scale (strongly disagree, disagree, agree, strongly agree) with eight statements. Aspects of a positive school environment were captured by asking:
1) "Overall, respondent feels/felt safe at school"; 2) "Overall, respondent is/was happy at school"; 3) "Most children enjoy/enjoyed being at school"; and 4) "The school provides/provided many opportunities to be involved in school activities." Negative aspects of the school environment were captured by agreement with 1) "Racism is/was a problem at school"; 2) "Bullying is/was a problem at school"; 3) "The presence of alcohol is/was a problem at school"; 4) "The presence of drugs is/was a problem at school"; and 5) "Violence is/was a problem at school." For each child, responses to the positive and negative environment questions were averaged, so that higher scores indicate more positive or more negative environments.
Regional and urban/rural geography were also part of the analysis, as research strongly suggests the importance of broader environmental factors.
Lastly, the cultural variables, "exposure to Indigenous language" and "family members' attendance of residential schools," were also included to capture their potential influence on children's weight status. It has been suggested that cultural characteristics such as language retention are important for Indigenous peoples' health in general, and previous research using the 2006 APS has found that parental residential school attendance was predictive of obesity among Métis children. 9,22 Children who were reported to be exposed to an Aboriginal language at home or outside the home were coded as "exposed." The APS asked whether the child's PMK (usually a parent) or the PMK's mother or father (the child's grandparent) had attended Indian residential or industrial schools. Those who did not respond to these questions (17%) were retained as a separate category called "not stated."
---
Statistical analyses
We used Pearson chi-square tests to assess bivariate associations between the independent variables and obesity. Thereafter we used a binary multivariate logistic regression to test the likelihood of children and youth having BMI in the "normal" range, versus being "overweight" or "obese," conditional on the independent variables that we found to have significant bivariate associations with overweight and obesity. A total of five nested models were fitted, including different groups of predictor variables. We performed our statistical analysis using SAS software version 9.4. 29 We used bootstrap weights provided by Statistics Canada and balanced repeated replication (BRR) to adjust variance estimates for the survey's complex sampling design.
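A simplified sketch of the nested modelling strategy is shown below. The published analysis was run in SAS 9.4 with bootstrap weights and BRR variance adjustment, which this sketch does not reproduce; the variable names and data are hypothetical, and the survey weight is used only for point estimation.

```python
import pandas as pd
import statsmodels.api as sm

# Illustrative sketch of nested weighted logistic models. 'df' is assumed to hold
# one row per child with a binary outcome and the predictors named below; the
# survey weight enters as a frequency weight for point estimates only.
df = pd.DataFrame({
    "overweight_or_obese": [1, 0, 0, 1, 1, 0, 1, 0],
    "very_low_food_security": [1, 0, 0, 1, 0, 0, 1, 0],
    "female": [0, 1, 1, 0, 0, 1, 1, 0],
    "income_quartile": [1, 3, 4, 1, 2, 4, 1, 2],
    "survey_weight": [1.2, 0.8, 1.0, 1.5, 0.9, 1.1, 1.3, 0.7],
})

model_specs = {
    "Model I": ["very_low_food_security", "female"],
    "Model II": ["very_low_food_security", "female", "income_quartile"],
}
for name, predictors in model_specs.items():
    X = sm.add_constant(df[predictors])
    fit = sm.GLM(
        df["overweight_or_obese"], X,
        family=sm.families.Binomial(),
        freq_weights=df["survey_weight"],
    ).fit()
    print(name, fit.params.round(2).to_dict())
```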
---
Results
Table 1 presents the distribution of weight status across the study variables, and Table 2 presents the nested logistic regression models.
---
Discussion
This study provides additional evidence that Indigenous children and youth are at higher risk of overweight and obesity than are other Canadian children. Among youth aged 12 to 17 years in our study sample, 30% were classified as either overweight or obese, compared with 20.7% of all Canadian youth in 2013. 30 First Nations and Métis girls were less likely to be overweight or obese than were boys, an observation that is consistent with previous literature on weight status and sex/gender. 16,31,32 Given that Indigenous children and youth are at a higher risk of overweight and obesity and the potential for weight to impact health outcomes over the life course, [3][4][5][6][7] it is important to understand the distal and "upstream" determinants that drive their weight status. The data shown here support the importance and utility of a socioecological perspective for those ends. 8 There has been little exploration of the relationship between food security and weight status among Indigenous children and youth, despite research suggesting its importance for the health of Aboriginal peoples more generally. 33 Research on the relationship between food insecurity and obesity or overweight among children and youth has thus far been inconclusive, as studies have found either a positive association between food insecurity and obesity 15,[34][35][36] or insignificant results. [37][38][39] There are only a few Canadian studies examining the food insecurity-obesity relationship. 14,40,41 Overall, this study found that food insecurity is indeed a risk factor for overweight or obesity among Indigenous children, with children in very food-insecure households having significantly higher odds of being overweight or obese, although this excess risk was not independent of household socioeconomic and demographic characteristics. Over 80% of the sample was reported as food secure, about 9% experienced low food security and 6.8% were severely food insecure. There were significant differences in the percentage of children and youth classified as normal, overweight and obese for all of the covariates examined (Table 1). At the individual level, among those who experienced very low food security, 27.7% were overweight and 19.2% were obese. Age was a critical factor for weight status, as 47.3% of Aboriginal children between the ages of 6 and 11 years were either overweight or obese compared to 30% of youth aged 12 to 17 years. A larger proportion of males fell into the overweight or obese classification (40.3%) compared to females (34.5%). Indigenous identity also had a marginal impact on the likelihood of overweight or obese weight status, as 40% of First Nations children fell into these weight categories, compared with 34% of Métis children. Children and youth who were exposed to an Aboriginal language were more likely to be overweight or obese (40.5%) compared to those who had no exposure (34.5%).
The family-level variables also tell an interesting story. The proportion of overweight or obese children does not largely differ based on mother's educational attainment; 41% of children whose mothers had less than secondary school graduation were overweight or obese, and approximately 35% of children whose mothers obtained a post-secondary certificate, diploma or degree fell into these weight categories. Almost half (44%) of children from the lowest income quartile were overweight or obese. The proportion of children from two-parent families classified as overweight or obese (35.6%) was almost six percentage points less than children from lone-parent families (41.3%), but similar to the proportion of overweight and obesity among children who lived in "other" family structures (i.e. children or youth living alone, with a relative or nonrelative) (35.7%). Of children and youth living in households where there was more than one person per room, 40.0% were classified as overweight or obese compared to 37.2% of children living in households with one or fewer people per room. While 17% of the sample did not respond to the question about a family member attending residential schools, children whose family members had attended residential schools had a higher proportion of overweight or obese status (40.3%) compared to those who did not (36.2%).
The regional and urban/rural geography variables showed that almost 40% of Aboriginal children and youth living in the Atlantic provinces, Quebec and Ontario were either overweight or obese. In small population centres, the proportion of children and youth who were overweight or obese was 42.5%, followed by medium population centres (38.9%), large population centres (35.7%) and rural areas (34.4%).
The bivariate relationships between the school environment variables and overweight were unclear. Children and youth in school environments that were rated the most positive were the most likely to be obese (18.4%), although those in the third quartile were the least likely to be obese (12.8%). Those rating their school environments the least negatively were the most likely to be obese (24.1%), while those with the most negative school environment rating were the least likely (13.0%).
We investigated the adjusted associations between these variables and children's weight status using sequential multivariate logistic regression (Table 2). In Model I, only food security and demographic variables were included, and those with very low food security had higher odds of being obese or overweight (OR = 1.54, 95% CI: 1.11-2.15). In Model II, other household variables were added, and the effect of food security fell below significance. Mother's educational attainment, family structure and crowding had no significant independent effects, but those in the third (OR = 0.76, 95% CI: 0.59-0.97) and fourth (OR = 0.72, 95% CI: 0.55-0.95) income quartiles were significantly less likely to be overweight or obese than those in the first (lowest) quartile.
School environment variables were added in Model III. A positive school environment rating was unrelated to overweight or obesity, while those in the second, third and fourth quartiles of "negative" school environment were more likely to be overweight or obese than those in the first quartile. Those whose school environments were rated the most negatively were the most likely to be overweight or obese, relative to those who rated their school environments the least negatively (OR = 1.43, 95% CI: 1.11-1.84).
Model IV added geographic variables. Rural or urban residence had no effect independent of the other variables, but First Nations and Métis children in British Columbia (OR = 0.65, 95% CI: 0.50-0.86) and the three territories (OR = 0.68, 95% CI: 0.49-0.95) were less likely to be overweight or obese, controlling for the other variables in the model. Lastly, Model V included the two cultural variables-exposure to an Indigenous language and family members having attended residential schools. Neither had a significant independent effect on obesity status. A negative school environment was also a significant predictor of overweight and obesity, independent of demographic, household and geographic factors. Understanding these results requires further investigation, but it has been suggested elsewhere that schools with negative climates may also be less likely to offer effective opportunities for physical activity. 42 Regional geography appeared to have an impact on weight status, as children and youth living in British Columbia or the three territories were significantly less likely to be overweight or obese compared to children living in Ontario, controlling for household socioeconomic characteristics. Similar variation has been observed previously, and some research suggests that greater emphasis on outdoor physical activity and availability of facilities may be partially responsible for the observed difference in weight status across provinces. 43 In addition, socioeconomic status 44,45 as well as being born outside of Canada 44 have been associated with lower BMI among adults in several provinces, including British Columbia. Somewhat surprisingly, however, there was no difference in the odds of being overweight or obese between Indigenous children and youth living in rural areas and those living in small, medium or large population centres, suggesting that the more important factors were operating at the household and school levels.
Given previous literature on the determinants of Indigenous peoples' health, we had expected to find that exposure to an Indigenous language, as a measure of cultural preservation, would be protective against being overweight or obese, and that having a family member who attended residential schools would be a risk factor. Although neither had an independent effect, it must be recognized that these measures included in the APS are only weak measures of cultural attachment or preservation. Further research is necessary to understand whether cultural factors might be related to overweight and obesity at the population level, and if so, in what way.
---
Strengths and limitations
No other studies to date have examined the relationship between food insecurity and obesity among Aboriginal children and youth at the population level. This study used a national survey with the largest available sample size of Indigenous children and youth.
A key limitation of this study, as well as many others investigating the food insecurity-obesity relationship, is that the design is cross-sectional and does not allow us to establish causation or explore how the relationship changes over time. Subjective BMI data were collected, as caregivers were asked to report their children's height and weight. This may have resulted in an underestimate of the prevalence of obesity, as research shows that parents tend to underestimate their children's weight and overestimate height, leading to a lower BMI than when objectively measured. 45,46 Covariates not measured in this study, such as physical activity and diet, could be responsible for confounding effects. Additionally, given that this is not a well-studied topic, we were not able to compare this association in Aboriginal children and youth with any similar associations in the general Canadian population.
It is also difficult to compare our results with other studies, because different measures are used to assess food insecurity. The United States uses the Agricultural Department Food Security Scale, 47 which is different from the measures used in the APS or the Canadian Community Health Survey, limiting comparisons. Moreover, while the literature discusses the importance of including culture and access to traditional foods for an Aboriginal definition of food security, 8,9 the APS food security questions do not include these dimensions.
---
Conclusion
We concluded that off-reserve Indigenous children and youth who are in households with very low food security are indeed at higher risk for overweight and obesity, but that this excess risk is not independent of household socioeconomic status; household income, adjusted for household size, is a reliable predictor. This suggests that household socioeconomic status is a major contributor to the high risk of overweight and obesity among First Nations and Métis children and youth. We also found that being in a negative school environment is associated with obesity risk, independent of demographic, household and geographic factors.
Given the complexity of childhood obesity and overweight, the available data limited our ability to identify conclusively the factors that are most important, including the potential role of food insecurity. There is a lack of longitudinal data to help us understand the interplay of various factors over the life course in different populations. Among Indigenous peoples specifically, community-based participatory research and research using qualitative methods would strongly complement quantitative investigations. Previous research on interventions in Aboriginal communities demonstrates the strength of such an approach. 33,41,42
---
Conflicts of interest
The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.
---
Authors' contributions
JB conceived the idea for the paper, conducted the literature review and preliminary data analysis, and wrote the first draft. MC assisted with the data analysis and manuscript draft, revised the paper and is principal investigator (PI) on the supporting grant. YG conducted the data analysis, and revised and commented on later drafts. PW supervised the data analysis and is co-PI on the supporting grant. All authors read and approved the final manuscript. | 23,041 | 1,813 |
d84f7f4e7c15612fa5d14ab3df9970064c533efb | Sustainability Assessment for the Protected Area Tourism System from the Perspective of Ecological-Economic-Social Coordinated Development | 2,023 | [
"JournalArticle"
] | Tourism is a significant way for the public to enjoy the cultural ecosystem services provided by protected areas (PAs). However, with PAs being expected to make much wider ecological, social and economic contributions to sustainability and human well-being, PA managers face challenges in coordinating tourism with other goals, such as ecological conservation and local community development. To address this challenge, we developed a sustainability assessment framework that considers the PA, local community, and tourism as a complex system comprising social, economic, and ecological subsystems from the perspective of subsystem relationships. The coupling coordination degree model and the obstacle degree model were applied to assess sustainability of the tourism system in Qinghai Lake Nature Reserve of China. The assessment results indicate that the sustainability index fluctuated between 2010 and 2019, but generally exhibited an upward trend, undergoing three stages and reaching the stage in 2019 where ecological sustainability took the lead. At this stage, the coupling coordination degree between the economy and society subsystems was at its lowest, and the economic subsystem faced the highest obstacle degree. The study demonstrates that involving scholars and administrators in the index selection process and considering both index information and management concerns when determining index weight makes the coupling coordination degree model more suitable for PA tourism systems. The assessment method developed in this study effectively reflects the temporal evolution of PA tourism system sustainability and provides valuable implications for coordinated ecological-economic-social management by analyzing obstacle factors. | Introduction
Protected areas (PAs) provide a most important and effective way to protect global biodiversity and ecological environments and contribute to human health and well-being [1][2][3]. Declining financial support for PAs in developing countries and even in some developed ones, such as Australia, the US and Canada, suggests that developing PAs by solely relying on government inputs is unsustainable [4]. Nature-based tourism is a popular type of cultural ecosystem service that can enhance the emotional connection between human beings and nature and contribute to the financial sustainability of PAs [5][6][7][8]. It is estimated that the annual tourist arrivals at the world's PAs reach 8 billion [9], and that the economic value of PAs derived from the improved mental health of visitors is US$6 trillion a year [10]. The wider benefits of park visits have not been quantified [11]. However, with PAs being expected to make much wider ecological, social and economic contributions to sustainability and human well-being, PA managers face challenges in coordinating tourism with other goals, such as nature conservation and local community development [4,12,13]. Therefore, both the International Union for Conservation of Nature (IUCN) and World Tourism Organization (WTO) emphasize the importance of sustainability assessment and adaptive management of tourism in PAs, so as to bring into full play the role of tourism in poverty reduction, community development and biodiversity conservation [14,15].
PAs, as important nature-based tourism destinations, are complex adaptive systems that involve multiple stakeholders and are affected by social, economic and environmental factors [16][17][18][19][20]. Increasingly, the PA, the local community and the tourism within the PA are being recognized as a complex system [21,22]. The significant impact of COVID-19 on PA tourism highlights the complex interdependencies among tourism, local communities and PAs, and such interdependencies should not be overlooked when seeking to improve PA sustainability [23,24]. A systematic way of thinking is therefore proposed to understand the interaction of key elements, the evolution of systems, and the assessment and management of PAs and local tourism [18,25]. Furthermore, Zhang et al. (2022) indicate that the interrelationships between subsystems provide an important and effective perspective for sustainability assessment of the PA tourism system [26]. Plummer and Fennell (2009) argued that sustainable tourism management in PAs should anticipate system dynamics and transformative changes [27]. However, traditional assessment methods tend to use sustainability indicators targeting current conditions, and poor selection of the indicators often leads to the misidentification and misinterpretation of the changes over time. Research on systematic thinking suggested that future conditions may include more extreme and rapid changes than previously [21]. In addition, although previous studies have proposed many indicators on the sustainability of tourism in PAs, they have paid less attention to the coordination among subsystems [26]. Therefore, new methods acknowledging uncertainties, changes and frequent interactions of the subsystems are required.
The coupling coordination degree model (CCDD) is an effective method used to evaluate the consistency and positive interaction among systems, and can reflect the trend of complex systems transforming from disorder to order [28]. In recent years, it has been extensively used in studies on the relationship between tourism and other systems and among components of the tourism system, such as tourism and environment [29][30][31], and the social-ecological status of island tourism destinations [32]. These studies reveal the importance and applicability of a coupled coordination perspective in measuring complex tourism systems. But they mostly focus on city (prefectural), provincial and national scales, and smaller scale studies represented by PAs are limited. In addition, since the current indicators for the CCDD are generally selected by authors [29,30,33], with other experts or stakeholders seldom engaged, the availability of comparable data and information and the objectivity of the assessment results are inevitably undermined. As the WTO (2004) noted, a participatory process can be productive, especially when key stakeholders and potential data providers are involved [14].
The Tibetan Plateau is widely recognized for its abundant biodiversity and diverse ecosystems, which are intricately linked to the livelihoods of over one billion people [34]. To safeguard the rich flora and fauna in this region, numerous PAs have been established, many of which are renowned tourist destinations [35,36]. The Qinghai Lake Nature Reserve (QLNR) is a typical example of these PAs. Given the vulnerable ecology and underdeveloped economy of the Tibetan Plateau, tourism in these PAs is expected to play a greater role in poverty reduction, community prosperity and biodiversity conservation. Thus, coordinated ecological-economic-social development is not only essential for the sustainability of each PA but also of paramount importance for realizing the United Nations Sustainable Development Goals and, specifically, for promoting green development on the Tibetan Plateau [37].
Due to the gaps identified in both practical management and theoretical assessment methods in PA tourism, this study aims to enhance the sustainability of PA tourism by focusing on subsystem relationships using the CCDD. To achieve this goal, three sub-aims have been identified: (1) improving the applicability of CCDD to the PA tourism system and enhancing the objectivity of indicator selection, (2) evaluating subsystem relationships and their changes in the PA tourism system, and (3) identifying obstacles to the sustainable development of PA tourism.
---
Materials and Methods
---
The Assessment Framework
A growing body of research conceptualizes tourism as a complex adaptive system [21,38,39] or calls for systematic thinking in the conceptualization of the relationships among tourism, PAs, and the local communities [40,41]. Stone et al. (2021) believed that without clear identification of interacting variables, any study on PAs and tourism will reveal an incomplete and potentially confusing picture, as the complex interactions between system components will not be apparent [42]. Schianetz and Kavanagh (2008) also pointed out that systematic thinking is critical for assessing the sustainability of natural tourist destinations located in eco-environmentally fragile areas [25].
Sustainability indicators can provide managers with required information and are essential for improving tourism planning and management and promoting sustainable development [14,43]. Scholars have developed a series of indicators from one or more dimensions of ecology, economy and society to evaluate the sustainability of different scales and different types of tourism destinations [44][45][46][47]. However, more attention is paid to the sustainability of the ecological, social and economic dimensions themselves, and less to the relationship among the three [26]. In practical terms, the three dimensions are "pillars" of sustainable development with frequent interaction, and a balance must be struck between them [43]. For Bramwell and Lane (2011), the "balance" of economic, social and environmental sustainability is the cornerstone of sustainable tourism policies [48]. Systematic thinking makes it possible to analyze the relationship among the three dimensions.
We define the PA, local community and tourism within the PA as a complex adaptive system composed of the three subsystems of society, economy and ecology. The economic subsystem mainly includes tourism-related economic factors within and around the PA, such as tourism revenue and tourist arrivals. The social subsystem mainly encompasses social and cultural factors within the PA and the adjacent communities, such as community participation, cultural preservation, and environmental education. The ecological subsystem mainly consists of natural elements within and around the PA, such as environmental quality and biodiversity conservation. The three subsystems frequently interact with one another through the flow of capital, information, and tourists, among other factors, and this encourages the system to evolve. In order to assess PA tourism from the perspective of ecological-economic-social coordinated development, this study calculates the coupling coordination degree among the subsystems based on sustainability evaluation (Figure 1). The evaluation covers two parts. The first is the sustainability of the subsystems, which includes the sustainability of the social, economic, and ecological subsystems. The corresponding evaluation outcomes are referred to as the social sustainability index, economic sustainability index, and ecological sustainability index, respectively. The second part of the evaluation concerns the coupling coordination degree among the subsystems, including the comprehensive coupling coordination degree of the three subsystems, and the coupling coordination degree between each pair of subsystems.
---
Study Area
QLNR is located in the Qinghai Province, northwest of China (Figure 2). Qinghai Lake, the most important tourist attraction of the PA, is the largest saline lake of China and a well-known tourist destination on the Tibetan Plateau. Furthermore, many people have lived by the lake for generations. Nature conservation, community prosperity, and sustainable tourism are three inseparable management objectives for QLNR [49]. The following are the reasons why we chose this reserve as a case study to assess the relationship between economic, social, and ecological subsystems of the PA tourism system. First, Qinghai Lake is an important node of two international bird migration channels in East Asia and Central Asia and also the only habitat of the Przewalski's gazelle [50]. It was recognized as a "Wetland of International Importance" by the Ramsar Convention in 1992. However, the monitoring results of 25 largest lakes in the world from 2008 to 2010 by United Nations Environment Programme (UNEP) showed that the load of human activities in Qinghai Lake had reached 90% [51].
Second, Qinghai Lake has been a popular nature-based tourist destination since the 1980s. In 2019, it received 4.43 million tourists. According to estimates by Zhao (2018) [52], the per capita ecological deficit relating to tourists in Qinghai Lake showed an overall rising trend, and tourist overload was common from 2001 to 2015. In 2017, in response to the central government of China's environmental inspection, which aimed to supervise and enforce local-level environmental protection policies, several scenic spots were closed, and numerous tourist facilities, including tents and bed and breakfasts, were demolished within and around the QLNR for non-compliance with PA management regulations. As a result, the duration of visitor stays within or around the reserve decreased, and the number of overnight stays also declined.
Third, similar to most Chinese PAs, QLNR is home to many local people whose livelihoods and lives are closely linked to the reserve and its tourism development. There are 11 towns around the reserve with 5870 residents, and 76.55 km² of farmland lies within the reserve. The establishment of the PA restricted local residents' activities that depend on natural resources, such as grazing and planting. In order to supplement their income, many local residents have resorted to selling tourist souvenirs and taking on part-time jobs in nearby hotels and restaurants. Some individuals have even illegally opened access routes to the reserve and established small tourist attractions, offering paid services such as canola flower sightseeing, horse riding, and photography. However, managing these community residents poses significant challenges, and sometimes conflicts arise between the community and the authorities. The potential impacts of these changes in livelihoods on community resilience remain unclear.
---
Index System
Establishing the index system proceeded via the two key steps of selecting indicators and determining their weights. The process can be seen in the flow chart for index system establishment (Figure 3).
---
Selection of Indicators
This paper adopted the fuzzy Delphi method (FDM) to select indicators. The Delphi method is commonly used to select sustainability indicators, but its uncertainty, vagueness and subjectivity need to be addressed [53]. The FDM applies fuzzy set theory to the Delphi method, which overcomes these shortcomings by reducing the number of questionnaire surveys, avoiding the distortion of individual expert opinions, and considering the fuzziness of the interview process [54]. The FDM with a dual-triangular fuzzy function, in particular, uses triangular fuzzy functions and the grey zone testing method to integrate expert opinions, which is more objective than the calculation of geometric means [55]. Hence, this paper adopted the FDM with a dual-triangular fuzzy function to select sustainability indicators, and the process was as follows.
---
1. Step 1: Making a list of candidate indicators

Following the principles of practicality, comparability, objectivity and data availability, 28 candidate indicators were generated by referring to the current literature [14,54,[56][57][58][59][60][61]] and conducting semi-structured interviews with tourism stakeholders in QLNR (administrators, community residents and tourists).
---
2. Step 2: Establishment of the fuzzy Delphi expert group and questionnaire survey
The key to the Delphi method lies in the expertise of the experts assigned and their familiarity with the subject matter, rather than the number of experts [53]. Saaty and Özdemir (2014) held that adding more experts who are less experienced may disturb the judgments of other experts and even lead to false conclusions [62]. Accordingly, 15 administrators and researchers familiar with tourism in PAs and having at least five years' professional experience in the related sectors were invited to fill in the expert questionnaire from June to July 2020. After eliminating invalid questionnaires with obvious missing answers or no discrimination of scores (e.g., 10 for all the maximum values and 0 for all the minimum values), eight valid responses were considered. As revealed in Table 1, the experts were evenly distributed across researchers on PA tourism (3), administrators of tourism in QLNR (2) and administrators for PA tourism at the provincial or national level (3), and were thus representative. Note: The same group of experts was consulted in the analytic hierarchy process (AHP).
---
3. Step 3: Index selection

After two rounds of fuzzy Delphi questionnaire surveys, 21 indicators were generated in total (Table 2). The questionnaire and its data analysis process can be seen in Appendices A and B. Though no academic consensus on the number of sustainability indicators has been reached, the WTO (2004) pointed out, after summarizing global practice, that 12-24 indicators are optimal, as an excessively large number of indicators may drive up the cost of data acquisition and be difficult to use, while use of only a few indicators tends to overlook economic, ecological or social issues. By this standard, the number of indicators in this paper is suitable [14]. Note: M_i - Z_i < 0 requires a second round of expert consultation (values in bold), and G_i < S_i means the indicator should be deleted.
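A hedged sketch of the grey-zone screening step is given below. The exact dual-triangular formulas used in this study are documented in Appendix B and are not reproduced here; the consensus value in the sketch follows a simplified textbook-style formulation, and the threshold S_i is a hypothetical value.

```python
import statistics

# Hedged sketch of a dual-triangular fuzzy Delphi screening step. Each expert gives
# a conservative (lowest acceptable) and an optimistic (highest acceptable) score
# for an indicator. The grey-zone test and the simple consensus value below follow
# one common formulation and are assumptions made only for illustration.

def fdm_screen(conservative, optimistic, threshold_s=6.0):
    c_mean, c_max = statistics.mean(conservative), max(conservative)
    o_min, o_mean = min(optimistic), statistics.mean(optimistic)
    m_i = o_mean - c_mean          # gap between the optimistic and conservative means
    z_i = c_max - o_min            # grey zone: overlap of the two score ranges
    if m_i - z_i < 0:
        return "no consensus - send to a second questionnaire round"
    g_i = (c_mean + o_mean) / 2    # simplified consensus value (assumption)
    return "retain indicator" if g_i >= threshold_s else "delete indicator"

print(fdm_screen(conservative=[5, 6, 6, 7], optimistic=[8, 9, 9, 10]))
```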
---
Calculation of Weights
Index data need to be standardized before weights are calculated. Formulas (1) and (2) were used to standardize the original data for positive and negative indicators, respectively:

x'_{ij} = \frac{x_{ij} - \min_{1 \le i \le m} x_{ij}}{\max_{1 \le i \le m} x_{ij} - \min_{1 \le i \le m} x_{ij}}    (1)

x'_{ij} = \frac{\max_{1 \le i \le m} x_{ij} - x_{ij}}{\max_{1 \le i \le m} x_{ij} - \min_{1 \le i \le m} x_{ij}}    (2)

where x_{ij} and x'_{ij}, respectively, refer to the original value and the standardized value of indicator j in year i, and \max_{1 \le i \le m} x_{ij} and \min_{1 \le i \le m} x_{ij} are the maximum and minimum values of indicator j among all years (2010-2019). An x'_{ij} whose standardized result is 0 is replaced by 0.0001 to avoid null values in the subsequent calculation with the entropy method (EM).
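A short sketch of Formulas (1) and (2) follows, with hypothetical data in which rows are years and columns are indicators.

```python
import numpy as np

# Sketch of the min-max standardization in Formulas (1) and (2): positive
# indicators use (x - min) / (max - min) and negative (cost-type) indicators use
# (max - x) / (max - min), with zeros replaced by 0.0001 as described in the text.

def standardize(data: np.ndarray, is_positive: np.ndarray) -> np.ndarray:
    x_min, x_max = data.min(axis=0), data.max(axis=0)
    span = x_max - x_min
    pos = (data - x_min) / span
    neg = (x_max - data) / span
    out = np.where(is_positive, pos, neg)
    return np.where(out == 0, 0.0001, out)   # avoid zeros for the entropy method

years_by_indicators = np.array([[1.0, 30.0], [2.0, 20.0], [4.0, 10.0]])
print(standardize(years_by_indicators, is_positive=np.array([True, False])))
```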
The analytic hierarchy process (AHP) is a common method to obtain the weight of sustainability indicators in the form of hierarchical data combined with experts' opinions [53,54]. It provides a way to systematize the complex issues of PA tourism with the advantage of being easy to operate and accommodating the views of different stakeholders [63]. This study used AHP to divide the indicator system into three hierarchical levels (Table 3), established the pairwise comparison matrix for each level, and invited the experts to compare each level of indicators pairwise on a scale of 1 to 9. Saaty and Özdemir (2014) found that in the use of AHP, engagement of no more than 7 or 8 experts is more likely to make for effective and consistent judgments [62]. The eight experts in Table 1 were therefore invited to participate, and seven of them eventually completed the expert questionnaire. Yaahp was used to process the AHP questionnaire data, using calculation and consistency checks to obtain the indicator weight w j .
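For illustration, the standard principal-eigenvector calculation behind AHP weights (implemented here directly rather than with the Yaahp software) can be sketched as follows; the pairwise comparison matrix is a hypothetical example.

```python
import numpy as np

# Sketch of deriving AHP weights from a pairwise comparison matrix: the principal
# eigenvector gives the weights, and the consistency ratio (CR) checks judgment
# consistency (CR < 0.1 is conventionally acceptable). The 3x3 matrix below is a
# hypothetical comparison of the economy, society and ecology subsystems.

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(matrix: np.ndarray):
    n = matrix.shape[0]
    eigenvalues, eigenvectors = np.linalg.eig(matrix)
    k = np.argmax(eigenvalues.real)
    weights = np.abs(eigenvectors[:, k].real)
    weights = weights / weights.sum()
    lambda_max = eigenvalues.real[k]
    ci = (lambda_max - n) / (n - 1)
    cr = ci / RI[n] if RI[n] > 0 else 0.0
    return weights, cr

pairwise = np.array([[1.0, 1/2, 1/3],
                     [2.0, 1.0, 1/2],
                     [3.0, 2.0, 1.0]])
w, cr = ahp_weights(pairwise)
print(w.round(3), round(cr, 3))
```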
The EM is commonly used to objectively calculate weights. Entropy is a measure of the uncertainty of indicator information. If the amount of information is higher, the uncertainty is lower and the entropy is smaller; if the amount of information is lower, the uncertainty is higher and the entropy is larger. Tang (2015) stated that the EM can avoid bias caused by subjective influence to a certain extent when determining the index weights by analyzing correlation degree and information among indexes [29]. The formulas are shown from (3) to (5).
y_{ij} = \frac{x'_{ij}}{\sum_{i=1}^{m} x'_{ij}}    (3)

d_j = 1 + \frac{1}{\ln m} \sum_{i=1}^{m} y_{ij} \ln y_{ij}    (4)

w_j = \frac{d_j}{\sum_{j=1}^{n} d_j}    (5)
In order to reduce the subjectivity of the AHP weight and make the assessment results more reliable, Formula (6) combines the EM and AHP weights to obtain the general weight w_j. The results are shown in Table 3.

w_j = \frac{W_j^{AHP} + W_j^{EM}}{2}    (6)

where W_j^{AHP} and W_j^{EM} denote the weights of indicator j obtained with the AHP and the EM (Formula (5)), respectively.
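A compact sketch of Formulas (3)-(6) is shown below, using hypothetical standardized data and placeholder AHP weights.

```python
import numpy as np

# Sketch of the entropy-method weights (Formulas (3)-(5)) and the combined weight
# (Formula (6)). 'x' holds standardized indicator values with years in rows; the
# AHP weights here are placeholders for the expert-derived values in Table 3.

def entropy_weights(x: np.ndarray) -> np.ndarray:
    m = x.shape[0]                                   # number of years
    y = x / x.sum(axis=0)                            # Formula (3)
    d = 1 + (y * np.log(y)).sum(axis=0) / np.log(m)  # Formula (4)
    return d / d.sum()                               # Formula (5)

x = np.array([[0.0001, 0.50, 1.00],
              [0.40,   0.80, 0.30],
              [1.00,   0.20, 0.60]])
w_em = entropy_weights(x)
w_ahp = np.array([0.5, 0.3, 0.2])                    # hypothetical AHP weights
w_combined = (w_ahp + w_em) / 2                      # Formula (6)
print(w_em.round(3), w_combined.round(3))
```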
The indicator data on the ecology subsystem and the economy subsystem for the sustainability assessment were sourced from the Qinghai Lake Protection and Utilization Administration of Qinghai Province, mainly including the Monitoring Report on the National Nature Reserve of Qinghai Lake (2010-2019) and statistics on the number of tourists and tourism income over the years. Data on social subsystem indicators and some of the local economic development indicators were obtained from China Statistical Yearbook (County-level) and China Statistical Yearbook (Township) from 2010-2019.
---
Coupling Coordination Degree Model
Suppose x_1, x_2, x_3, ..., x_n are the indicators of the economy subsystem and x'_i is the corresponding standardized value of x_i; then the economic sustainability index is f_1(x) = \sum_{i=1}^{n} w_i x'_i, where w_i represents the weight of indicator i in the economy subsystem. Similarly, the social sustainability index and the ecological sustainability index are f_2(x) and f_3(x), respectively.
The coupling coordination degree among the subsystems was calculated using formulas (7) to (9).
C = \left\{ \frac{f_1(x) \times f_2(x) \times \cdots \times f_n(x)}{\left[ \left( f_1(x) + f_2(x) + \cdots + f_n(x) \right) / n \right]^n} \right\}^{1/n}    (7)

T = \gamma_1 f_1(x) + \gamma_2 f_2(x) + \cdots + \gamma_n f_n(x)    (8)

D = \sqrt{C \times T}    (9)
where C represents the coupling degree, D represents the coupling coordination degree, γ is the weight coefficient of the corresponding subsystem, and n is the number of subsystems. In the case of n = 3, T stands for the comprehensive sustainability index of the PA tourism system. By referring to the existing body of research [33,64,65], this paper defines the gradation criteria of the coupling degree and the coupling coordination degree, as shown in Table 4. We used the obstacle degree model to identify obstacle factors of the tourism system in QLNR. The formulas are as follows [66]:

I_{ij} = 1 - x'_{ij}    (10)

O_j = \frac{F_j I_{ij}}{\sum_{j=1}^{n} F_j I_{ij}}    (11)

Q_j = \sum O_j    (12)
where x' ij is the standardized value of indicator j in year i, I ij represents the deviation degree of indicator j, F j is the contribution degree of indicator j, which can be expressed by index weight, O j represents the obstacle degree of indicator j, Q j represents the obstacle degree of a subsystem.
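The calculations in Formulas (7)-(12) can be sketched as follows; the subsystem indices, weights and indicator values are hypothetical examples for a single year.

```python
import numpy as np

# Sketch of the coupling coordination degree (Formulas (7)-(9)) and the obstacle
# degree model (Formulas (10)-(12)). All numeric inputs below are hypothetical.

def coupling_coordination(f: np.ndarray, gamma: np.ndarray):
    n = len(f)
    c = (np.prod(f) / ((f.sum() / n) ** n)) ** (1 / n)   # coupling degree, Formula (7)
    t = float(np.dot(gamma, f))                          # comprehensive index, Formula (8)
    d = np.sqrt(c * t)                                   # coupling coordination degree, (9)
    return c, t, d

def obstacle_degrees(x_std: np.ndarray, weights: np.ndarray) -> np.ndarray:
    deviation = 1 - x_std                                # Formula (10)
    contribution = weights * deviation
    return contribution / contribution.sum()             # Formula (11)

f = np.array([0.45, 0.60, 0.72])          # economic, social, ecological sustainability indices
gamma = np.array([0.32, 0.33, 0.35])      # hypothetical subsystem weights
print(coupling_coordination(f, gamma))

x_std = np.array([0.2, 0.9, 0.5, 0.7])    # standardized indicator values in one year
w = np.array([0.3, 0.2, 0.25, 0.25])      # indicator weights
o = obstacle_degrees(x_std, w)
print(o.round(3), o[:2].sum())            # Formula (12): subsystem obstacle = sum of its indicators
```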
---
Results and Discussion
---
Indicators and Weights
As shown in Table 3, the indicator system aligns with the sustainable tourism management principles for PAs put forward by IUCN, including indicators on nature conservation, communities' right to development and cultural authenticity, continuous and fair development of the tourism economy and provision of valuable recreational experience [15]. These principles also echo the functional orientation of China's PA system, which aims to protect nature, provide high-quality ecological products, and maintain harmonious coexistence between humans and nature for sustainable development [67]. Specifically, in the economy subsystem, A 1 and A 2 have the same weight, indicating that both economic growth and economic efficiency are critical for economic development. In the society subsystem, nature education is the most important, with the sum of the weights of the three indicators, namely environmental interpretation facility (B 31 ), environmental interpreters (B 32 ) and capital input on nature education (B 33 ), accounting for 65.50% of the whole subsystem. This reflects the importance of nature education for tourism in PAs in serving social functions. In the ecology subsystem, C 2 exerts the greatest influence, accounting for 66.35% of the entire subsystem. More specifically, protection of key species (C 22 ) was given the highest weight with AHP, occupying 59.10% of the ecology subsystem. Thus, the biodiversity conservation represented by key species is the most important factor for the ecology subsystem.
There is little difference in the weights of the three subsystems. The result that the ecology subsystem has the highest weight is consistent with the study by Yu (2006) in Tianmu Mountain Nature Reserve, which observed the principle of ecological conservation coming first [57]. What is different is that in the present study, the society subsystem carries more weight than the economy subsystem. Given the management objectives of promoting local development and ecological and cultural protection for PAs, we believe it rational to pay greater attention to social and cultural factors of Chinese PAs for two reasons. First, as many communities live in and around PAs in China, it is critical for sustainable tourism management in PAs to reduce conflicts between PAs and communities and win over the community support [68,69]. Second, unlike the western world's immersion in wilderness aesthetics, Chinese tourists uphold the traditional culture that the human is an integral part of nature and prefer landscapes with man and nature coexisting in harmony [70]. Cultural factors constitute one of the great appeals of tourism in PAs.
According to the analysis methods and their computational formulas in this study, it is evident that the weighting of indicators not only directly affects the sustainability index, but also influences the results of coupling coordination degree and obstacle degree calculations. Therefore, the method chosen for determining indicator weights is of great importance. As indicated in Table 3, it is observed that the weights of certain indicators differ significantly when obtained using the AHP compared to the EM. Some indicators are regarded as important by experts and thus heavily weighted, but offer limited information, such as A 21 , C 21 and C 22 . For these indicators, weighting with the EM alone will not be able to reflect the importance of the indicators in practice. In contrast, some other indicators, such as B 31 , B 32 and B 33 , which showed rapid changes in the study period, will be neglected if only weighted with the AHP. Therefore, it is appropriate and necessary to combine both methods in an indicator system reflecting the temporal changes.
---
Coupling Coordination Degree and the System Evolution
---
Sustainability Index
As shown in Figure 4, the sustainability index of QLNR tourism system and its subsystems fluctuated in 2010-2019, but generally trended upwards. The social sustainability index was at its lowest level in the three subsystems between 2010 and 2013, but has since maintained a steady upward trend overall since 2014. After 2017, it began to surpass the economic sustainability index. The ecological sustainability index exhibited fluctuations during the period between 2010 and 2016, but experienced a rapid increase after 2017, reaching a 10-year peak in 2019. The economic sustainability index continued to fluctuate over the decade and approached its lowest level in 2017. The gap in the sustainability index between the economic subsystem and the ecological subsystem widened further and further after 2017.
---
Coupling Degree
As revealed in Table 5, from 2010 to 2019, the comprehensive coupling degree among the three subsystems and the coupling degree between each pair of subsystems averaged between 0.8 and 1.0, a "superiorly high" coupling level. This means the three subsystems were closely connected and frequently interacted with each other.
---
Coupling Coordination Degree
According to Figure 5, from 2010 to 2019, the comprehensive coupling coordination degree among the three subsystems and the coupling coordination degree between each pair of subsystems showed an overall upward trend, but the coordination level remained unbalanced until 2019. Only the coupling coordination degree between the society subsystem and ecology subsystem reached the "barely balanced" level in 2019, the highest score in a decade. Specifically, the coupling coordination degree between the ecological subsystem and the social subsystem remained at the lowest level before 2016. However, it rapidly increased thereafter and reached the best-coordinated level among the four groups. On the other hand, the coupling coordination degree between the economic and social subsystems significantly decreased after 2016, becoming the worst-coordinated level among them.
---
Stages of the System Evolution
Combination of the evaluation results of the subsystem sustainability index and the coupling coordination degree shows that the tourism system in QLNR evolved across three stages (Table 6). During the first stage (2010-2014), the economy subsystem was leading in development, whereas the society subsystem lagged behind. The relationships between the three subsystems were "moderately unbalanced" in general, with the coupling coordination degree between the society and ecology subsystems being the lowest. During the second stage (2015-2017), the society subsystem took the lead in development, while the ecology subsystem lagged behind. The coupling coordination degree among three subsystems was at the "slightly unbalanced" level, and the coupling coordination degree between the economy and the society subsystems was relatively higher. During the third stage (2018-2019), the ecological sustainability index rose rapidly, while the economic sustainability index declined. The coupling coordination degree between the society and the ecology subsystems was relatively higher, while that between the economy and the society subsystems was the poorest at this stage. Consequently, it is now urgent to improve the development level and efficiency of the economy subsystem and enhance the coupling coordination degree between the economy and the society and ecology subsystems.
Table 6 reports, for each stage, the ranking of the subsystem sustainability indices: Stage 1 (2010-2014): f 1 (x) > f 3 (x) > f 2 (x); Stage 2 (2015-2017): f 2 (x) > f 1 (x) > f 3 (x); Stage 3 (2018-2019): f 3 (x) > f 2 (x) > f 1 (x). It also reports the rankings of the coupling coordination degree between subsystems in each stage. Note: D 12 refers to the coupling coordination degree of the economic and social subsystems, D 13 to that of the economic and ecological subsystems, and D 23 to that of the social and ecological subsystems.
---
Obstacle Factors for Sustainable Development and Management Implications
The obstacle model can help us identify the obstacle factors for the sustainable development of the system [71]. In order to promote coordinated development among subsystems, we conducted an analysis of the obstacle degree for each subsystem and identified the factors that caused them. Table 7 lists the obstacle degree values and the top three obstacle factors for each subsystem from 2010 to 2019. The social subsystem had the highest obstacle degree during 2010-2013, followed by the ecological subsystem during 2014-2018, and the economic subsystem in 2019. This is roughly consistent in time with the three stages that QLNR tourism system has gone through and explains the main obstacle factors to the system development in each stage. Specifically, over the decade, the most common obstacle factors in the social and ecological subsystems were the three natural education-related indicators (B 33 , B 32 , B 31 ), and the wetland area (C 11 ), vegetation coverage area (C 12 ) and key species protection (C 22 ), respectively. In contrast, obstacle factors in the economic subsystem were more dispersed, with the most common indicators being tourism revenue structure (A 11 ) and growth in tourist numbers (A 23 ). In 2019, the economic subsystem posed the greatest obstacle to the sustainable development of the QLNR tourism system. The top three obstacle indicators for achieving sustainable economic development were identified as the per capita tourist consumption level (A 12 ), local economic growth (A 21 ), and the spatial distribution of tourism income (A 14 ). As revealed by the assessment results, the tourism development in QLNR was in the leading stage of ecological sustainability. However, the coupling coordination degree between the economy and the society subsystems was the lowest, and the economic subsystem had the highest obstacle degree in 2019. Therefore, it is critical to improve the economy development efficiency and enhance the coupling coordination degree between economy and the other two subsystems for sustainability of the whole system. Upon investigation, the decline in sustainability of the economic subsystem has been attributed to two significant events: the environmental inspection by central government in 2017 and the COVID-19 pandemic since 2019. The former led to a reduction in tourist attractions and reception facilities in and around the QLNR, resulting a change from a tourist destination to a transit point. The latter has caused a sharp decrease in tourists from outside Qinghai Province and a low motivation for tourism consumption within the province. With the aim of promoting the coordinated development of the economy, society, and ecology in Qinghai Lake Nature Reserve as a tourist destination, and based on the assessment of subsystem relationships and identification of obstacle factors, the following management insights can be derived.
Socially, community participation in tourism needs to be strengthened. On the one hand, local communities can engage in farming or herding on a flexible schedule when PA tourism is suspended, which can relieve the pressure on tourism operations to hire full-time staff under unexpected situations such as epidemics. On the other hand, local communities can gain knowledge, ability and income through participation, which contributes to the PA's goal of promoting community prosperity. Meanwhile, as livelihoods become less dependent on natural resource extraction and incomes increase, conflicts between communities and PA managers are expected to decrease. In addition, as one of the important functions of PAs, nature education, especially in terms of facilities (B31), personnel (B32) and input (B33), requires more attention to enhance ecological awareness among the population and foster emotional connections between people and nature, in order to gain public support for PA efforts.
Ecologically, close attention should be paid to changes in some ecological indicators to reveal the influencing mechanisms of tourism. Restrictions on travel during the pandemic have created an opportunity for natural environmental restoration and less artificial interference with biodiversity [72]. Administrators and researchers can make use of this period to identify the ecological indicators that are most responsive to the weakened disturbance from tourism, such as animals, plants, and water (C12, C13, C13). It is recommended to optimize tourism project planning by considering both the time of opening and the spatial layout in light of these tourism-influenced mechanisms, in order to identify the most favorable times and locations for visits and to mitigate the negative environmental impacts on key protection objects, such as waterbirds, Przewalski's gazelle, and plants.
Diversified environmentally-friendly tourism projects are suggested to improve tourism economy efficiency on the premise of ecological conservation. For instance, long-distance birdwatching and sightseeing by bicycle, as well as nature education and Tibetan cultural experiences, can be designed to prolong the stay of tourists, increase per capita spending (A12), and drive up tourism income from diversified sources. Furthermore, providing information about accommodations in nearby towns or partnering with local lodging services can also increase the overnight stay rate of tourists, thus contributing to the economic benefits of tourism. For PA tourism development, it is crucial to rely on peripheral areas of the PA to provide accommodation, dining, and other services as much as possible, in order to minimize the impact of tourism on the PA's environment and biodiversity, while also promoting the development of local communities.
---
Conclusions
The sustainable tourism development of PAs is a complex process, in which economic, social and ecological factors interact with each other and in which resource administrators, tourists and local communities, among other stakeholders, participate. Systems thinking has offered us a holistic analytical perspective. From the case of the QLNR tourism system, it is evident that changes in external factors such as policies can significantly improve the sustainability of one subsystem while potentially reducing the sustainability of another. Hence, the assessment of relationships among subsystems should not be overlooked, as the sustainability of a PA tourism system depends not only on the sustainability level of its individual subsystems, but also on their balance.
In order to propose an integrated evaluation approach that reflects the temporal evolution of the relationships among subsystems from the perspective of ecological-economic-social coordinated development, we established a sustainability evaluation framework for the PA tourism system, which includes social, economic, and ecological subsystems, and identified through the FDM a set of indicators in line with the development goals of sustainable tourism in the context of PAs. Subsequently, the CCDD and the obstacle degree model were used to reflect the temporal evolution of the sustainability of the reserve and identify the obstacle factors.
Our paper makes a significant contribution to the literature in three aspects. Firstly, the CCDD was introduced to assess the relationships among subsystems of a PA tourism system. While most previous studies have focused on large-scale tourist destinations such as cities (prefectures) and provinces or national-level destinations, our study specifically focuses on the PA tourism system. Secondly, we adopted the FDM to include scholars and administrators in the index selection process, which makes the CCDD more applicable to PA tourism systems. This is a departure from the norm, as the indicators of the CCDD are usually selected solely by authors, without engaging other stakeholders. Lastly, to improve the applicability and objectivity of the evaluation, we combined the analytic hierarchy process and the entropy method to determine the index weights, taking into account both index information and management concerns. The results show that this is necessary for diachronic evaluation and sustainability management of the PA tourism system.
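As a rough illustration of the combined weighting mentioned above, the sketch below derives objective weights with the entropy method and blends them with subjective weights of the kind an analytic hierarchy process would yield. The sample matrix, the AHP weights, and the 0.5/0.5 blend are assumptions for demonstration, not the study's actual data or weighting scheme.

```python
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Objective weights from the entropy method.

    X : (years x indicators) matrix of positively oriented, min-max normalized values.
    """
    P = X / X.sum(axis=0, keepdims=True)   # each year's share of an indicator's total
    P = np.where(P == 0, 1e-12, P)         # avoid log(0)
    k = 1.0 / np.log(X.shape[0])
    e = -k * (P * np.log(P)).sum(axis=0)   # entropy per indicator
    d = 1 - e                              # degree of divergence
    return d / d.sum()

# Hypothetical normalized data: 5 years x 3 indicators
X = np.array([[0.2, 0.9, 0.4],
              [0.4, 0.8, 0.5],
              [0.5, 0.7, 0.6],
              [0.7, 0.6, 0.7],
              [0.9, 0.5, 0.9]])
w_entropy = entropy_weights(X)
w_ahp = np.array([0.5, 0.3, 0.2])          # assumed AHP judgment weights
w_combined = 0.5 * w_entropy + 0.5 * w_ahp # simple equal blend (an assumption)
print(np.round(w_combined, 3))
```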
---
Limitations and Future Research
Given the variety of PAs and their wide differences across countries and regions in natural ecological, social and cultural conditions, and tourist preferences, the indicator system should be tailored to actual situations when applied in other PAs. In addition, a single case study is not sufficient to draw general conclusions. Therefore, it is important to undertake more studies on tourism systems in different categories of PAs, or PAs in different regions, in the future to identify characteristics of subsystem relationships and obstacle factors.
For PA tourism systems, the coupling coordination degree assessment indicators should be adapted to the specific situations, such as conservation objectives and community conditions. Stakeholder participation is therefore crucial in selecting indicators. This paper involved administrators and related academics in the selection through the FDM. However, local residents in QLNR, who generally speak the Tibetan language and have low literacy skills, were not included due to difficulties in communicating and understanding this method. Future research can include non-governmental organizations, tourists, local community residents and other stakeholders of tourism in PAs in selecting indicators and determining weights to better cater to local realities.
---
In the expert consultation questionnaire, each expert is required to give a possible interval value [C_i, O_i] and a definite value P_i between C_i and O_i for each indicator to be evaluated, where i is the indicator, the minimum value C_i is the "most conservative cognitive value" of i, and the maximum value O_i is the "most optimistic cognitive value" of i.
The steps of questionnaire analysis are as follows.
Step 1: Conducting statistical analysis for each index i. Extreme values lying more than two standard deviations away were excluded, and then the minimum values (C_i^L, O_i^L), geometric mean values (C_i^M, O_i^M) and maximum values (C_i^U, O_i^U) were calculated. The conservative triangular fuzzy function C_i = (C_i^L, C_i^M, C_i^U) and the optimistic triangular fuzzy function O_i = (O_i^L, O_i^M, O_i^U) were established (Figure B1).
Step 2: Calculating the consistency degree of experts on indicators. The grey zone was used to judge whether the expert opinions reached convergence, and G_i (representing the consensus degree of experts) was determined according to different situations. If C_i^U ≤ O_i^L, then G_i = (C_i^M + O_i^M) / 2; the two triangular fuzzy functions do not overlap, indicating that the experts have reached a consensus on the index. If C_i^U > O_i^L, the two triangular fuzzy functions have overlapping intervals. When Z_i (Z_i = C_i^U - O_i^L) < M_i (M_i = O_i^M - C_i^M), the opinions among experts differ only slightly, and G_i = (C_i^U × O_i^M - O_i^L × C_i^M) / (Z_i + M_i). When Z_i > M_i, the opinions of experts differ greatly, and the above steps need to be repeated until the opinions of experts on all indicators converge.
Step 3: Calculating the threshold. There are three commonly used methods for determining the threshold value (S): (I) according to established experience, set the threshold value at 5-7; (II) determine S of indicator i by calculating the geometric means of C_i, O_i, and P_i, and then the geometric mean of the three geometric means; (III) calculate the arithmetic mean value of P_i as the threshold value. This paper chooses the second method, which is relatively objective.
Step 4: Index selection. According to Table 2, after the first round of consultation, a total of nine indicators had M_i - Z_i < 0, indicating that expert opinions had not reached convergence. After the second round of expert consultation, all nine reached convergence, but seven indicators with G_i values smaller than the threshold value S were deleted. After two rounds of fuzzy Delphi questionnaire surveys, 21 indicators were retained in total (Table 2).
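The screening logic of Steps 1-4 can be condensed into a short sketch. It builds triangular fuzzy numbers from each expert's conservative and optimistic scores and applies the grey-zone consensus test described above; for brevity it omits the two-standard-deviation outlier exclusion, and the expert scores and threshold shown are hypothetical.

```python
import numpy as np

def gmean(a: np.ndarray) -> float:
    """Geometric mean of a positive-valued array."""
    return float(np.exp(np.log(a).mean()))

def fdm_consensus(conservative: np.ndarray, optimistic: np.ndarray):
    """Grey-zone consensus value G_i for one indicator in the fuzzy Delphi method.

    conservative : experts' most conservative scores C_i for the indicator.
    optimistic   : experts' most optimistic scores O_i for the indicator.
    Returns (G_i, converged) following the grey-zone test in Steps 1-2.
    """
    cM, cU = gmean(conservative), conservative.max()
    oL, oM = optimistic.min(), gmean(optimistic)
    if cU <= oL:                       # no overlap: consensus reached
        return (cM + oM) / 2, True
    z, m = cU - oL, oM - cM            # grey zone vs. spread of the geometric means
    if z < m:                          # overlap but only minor disagreement
        return (cU * oM - oL * cM) / (z + m), True
    return None, False                 # large disagreement: another round is needed

# Hypothetical scores from six experts for one candidate indicator (0-10 scale)
C = np.array([3, 4, 4, 5, 4, 3], dtype=float)
O = np.array([7, 8, 8, 9, 7, 8], dtype=float)
G, ok = fdm_consensus(C, O)
threshold = 6.0                        # assumed threshold S for illustration
print(ok, G, ok and G >= threshold)    # converged? consensus value, retained?
```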
---
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
---
Author Contributions: Conceptualization, X.Z. and L.Z.; methodology, X.Z.; software, X.Z.; formal analysis, X.Z.; investigation, X.Z. and L.Z.; data curation, H.Y.; writing-original draft preparation, X.Z.; writing-review and editing, H.Y.; visualization, L.-E.W.; supervision, L.Z.; funding acquisition, L.Z. All authors have read and agreed to the published version of the manuscript.
---
Conflicts of Interest:
The authors declare no conflict of interest.
---
Appendix A. The Questionnaire for Fuzzy Delphi Method
The questionnaire for experts on sustainability indicators of tourism in Qinghai Lake Nature Reserve.
Dear experts:
We are researchers from ***. As part of this research, we are conducting a questionnaire survey on the sustainable development of tourism in Qinghai Lake Nature Reserve (QLNR). The questionnaire is anonymous and will be used for scientific research purposes only; please feel free to fill it in. Your honest opinions are very important for us to reach objective and meaningful research conclusions. Thank you for your support and cooperation! We wish you good health, smooth work and a happy family!
Instructions:
This questionnaire uses an assignment method with scores ranging from 0 to 10. The higher the number, the more you approve of using the indicator for the sustainability evaluation of tourism in QLNR; the lower the number, the less suitable you consider the indicator to be. | 48,085 | 1,746 |
6678363f1d16ea2e465277216f62186efdd28708 | Information and Communication Technologies in Social Work. | 2,010 | [
"JournalArticle"
] | Information and communication technologies (ICTs) are electronic tools used to convey, manipulate and store information. The exponential growth of Internet access and ICTs greatly influenced social, political, and economic processes in the United States, and worldwide. Regardless of the level of practice, ICTs will continue influencing the careers of social workers and the clients they serve. ICTs have received some attention in the social work literature and curriculum, but we argue that this level of attention is not adequate given their ubiquity, growth and influence, specifically as it relates to upholding social work ethics. Significant attention is needed to help ensure social workers are responsive to the technological changes in the health care system, including the health care infrastructure and use of technology among clients. Social workers also need ICT competencies in order to effectively lead different types of social change initiatives or collaborate with professionals of other disciplines who are using ICTs as part of existing strategies. This paper also identifies potential pitfalls and challenges with respect to the adoption of ICTs, with recommendations for advancing their use in practice, education, and research. | INTRODUCTION
Information and communication technologies (ICTs) are broadly defined as technologies used to convey, manipulate and store data by electronic means (Open University, nd). This can include e-mail, SMS text messaging, video chat (e.g., Skype), and online social media (e.g., Facebook). It also includes all the different computing devices (e.g., laptop computers and smart phones) that carry out a wide range of communication and information functions. ICTs are pervasive in developed countries and considered integral in the efforts to build social, political and economic participation in developing countries. For example, the United Nations (2006) recognizes that ICTs are necessary for helping the world achieve eight time-specific goals for reducing poverty and other social and economic problems. The World Health Organization also sees ICTs as contributing to health improvement in developing countries in three ways: 1) as a way for doctors in developing countries to be trained in advances in practice; 2) as a delivery mechanism to poor and remote areas; and 3) to increase transparency and efficiency of governance, which is critical for the delivery of publicly provided health services (Chandrasekhar & Ghosh, 2001).
With the growth of the Internet, a wide range of ICTs have transformed social relationships, education, and the dissemination of information. It is argued that online relationships can have properties of intimacy, richness, and liberation that rival or exceed offline relationships, as online relationships tend to be based more on mutual interest than on physical proximity (Bargh, McKenna, & Fitzsimons, 2002). In the popular book The World is Flat, Thomas Friedman (2005) argues that collaborative technologies -i.e., interactions between people supported by ICTs -have expanded the possibilities for forming new businesses and distributing valued goods and services for anyone. Educational theorist and technologist Curtis Bonk recently published a highly insightful and influential book called The World is Open (Bonk, 2009). Bonk (2009) argues that, with the development of ICTs, even the most remote areas of the world have opportunities to gain access to the highest quality learning resources. Proceedings from the 2004 International Workshop on Improving E-Learning Policies and Programs also showed that ICTs are helping transform governments through workforce transformation, citizen education, and service optimization (Asian Development Bank Institute, 2004). Innumerable accounts and data sources demonstrate that ICTs have reduced boundaries and increased access to information and education (see Bonk, 2009; Friedman, 2005), which has led the United Nations Educational, Scientific, and Cultural Organization (UNESCO) to focus on assisting Member States in developing robust policies in ICTs and higher education (UNESCO, nd).
Although ICTs and the growth of the Internet are not without problems, a reality remains that both will continue to shape the global community. Other disciplines have recognized the importance of ICT and consider it to be a key part of professional development. For example, the National Business Education Association (NBEA) states: "mastery of technology tools is a requirement rather than an option for enhancing academic, business, and personal performance" (NBEA, 2007, p. 88). Resources are available that speak to the role of technology in the social work curriculum (e.g., Coe Regan & Freddolino, 2008;Faux & Black-Hughes, 2000;Giffords, 1998;Marson, 1997;Sapey, 1997) and in research and practice (e.g., Journal of Technology in Human Services). The National Association of Social Workers (NASW) and Association of Social Work Boards published a set of ten standards regarding technology and social work practice, which serves as a guide for the social work profession to incorporate technology into its various missions (NASW, 2005).
Despite this interest in technology, the attention that the field of social work has given to ICTs in research, education, and practice does not match the efforts of other national and international organizations that view ICTs as critical to improving the lives of disadvantaged and disenfranchised persons, and necessary for all forms of civil engagement. The Council on Social Work Education (CSWE) calls for the integration of computer technology into social work education, but there are no explicit standards for integration or student learning (CSWE, 2008; see also Beaulaurier & Radisch, 2005). Asking other social workers, social work students, and social work educators can easily reveal that many are unaware of the NASW technology standards. A review of syllabi of social work courses will also show that ICTs, beyond e-mail communication, are generally not present in the educational environment. Consequently, social work students are not being adequately prepared in the use of ICTs, which are integral in the workforce today and will become even more important over time (Parrot & Madoc-Jones, 2008).
In this paper, we argue that ICTs are of critical importance to advancing the field of social work. Specifically, they provide efficient and effective ways of organizing people and ideas, offer greater access to knowledge and education, and increase the efficiency and collaborative potential of our work. This paper takes the position that many aspects of the NASW Code of Ethics (1999) can be advanced through careful and thoughtful application of ICTs. Thus, competencies with ICTs and ICT literacy should be required learning outcomes in social work education and continuing education. This includes having the knowledge and skills to understand and use ICTs to achieve a specific purpose (i.e., competencies), in addition to knowing the major concepts and language associated with ICT (i.e., literacy). Within this framework, this paper identifies specific aspects of the Code of Ethics (1999), showing how ICTs play a critical role in achieving the desired values and principles. Recommendations on how ICTs can be more strategically incorporated in the classroom, along with potential pitfalls, are discussed.
---
OVERVIEW OF ICTs
---
ICTs in Society
Computer technology is becoming more efficient, productive, and cheaper. Advances in technology are producing more powerful computing devices to create a dynamic virtual network that allows people all over the world to communicate and share information with each other. The growth and importance of the technology and the virtual network are underscored by two important laws. First is Moore's Law, which states that "integrated circuit technology advancements would enable the semiconductor industry to double the number of components on every chip 18 to 24 months" (Coyle, 2009, p. 559). Essentially, this means that the speed and productivity of a computer increases two-fold every 1.5 to 2 years. While such growth may not be sustained indefinitely, the exponential growth of technology realized thus far has reshaped our society and will continue to be a dynamic force in future generations. It is important that social workers understand the role that technology plays in shaping the lives of clients and the services that are delivered. The second law, Metcalfe's Law, states "the value of a network increases in proportion to the square of the number of people connected to the network" (Coyle, 2009, p. 559). These rapidly developing technologies, and the individuals that utilize them, are producing virtual networks of greater size and value.
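As a back-of-the-envelope illustration of the two laws quoted above, the short sketch below projects component counts doubling roughly every two years and compares network value growing with the square of the number of connected users; the starting figures are arbitrary.

```python
# Moore's Law: component counts per chip double roughly every 2 years
components = 10_000
for year in range(0, 11, 2):
    print(f"year {year}: ~{components:,} components per chip")
    components *= 2

# Metcalfe's Law: network value grows with the square of the number of connected users
for users in (1_000, 10_000, 100_000):
    print(f"{users:,} users -> relative network value {users ** 2:,}")
```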
At the time Granovetter published his classic study on networks and employment (Granovetter, 1973), ICTs played almost no role in developing and maintaining network relationships. Today, Internet sites such as LinkedIn (www.linkedin.com) produce vast social networks that provide opportunities for professionals and employers to advertise and communicate. To effectively use social networks, whether for obtaining employment, securing resources, or obtaining information, social workers need to understand the capabilities of these networks, and how they can be effectively understood, managed, and utilized within a digital environment.
---
ICTs in Higher Education
Applications of ICTs in institutions of higher education have grown tremendously and will continue to shape the delivery of social work education. This is already realized through emerging distance education courses and other strategies for using technology in the social work classroom (e.g., Stocks & Freddolino, 1999; Wernet, Olliges, & Delicath, 2000). Courses offered online greatly assist students who are long distance commuters or students with disabilities. In both distance and local learning, many educators utilize course management systems (e.g., Sakai, Moodle, and Blackboard) for managing virtually every aspect of a course. These course management systems often provide students with tools to assist each other in learning the course material (e.g., synchronous and asynchronous communication). Largely because of these opportunities, some have even predicted that ICTs may eventually eclipse the traditional college classroom (see Bonk, 2009).
Within colleges and universities, ICTs serve both administrative and academic functions. Students are able to accomplish a variety of tasks using computer networks that save the institution time and money, such as facilitating billing and payments to the school, requesting and obtaining financial aid and/or scholarships, class scheduling, requesting official transcripts, selecting housing locations, etc. With regard to social work research, ICTs are part of an infrastructure for newer research methodologies (e.g. Geographic Information Systems, computer simulations, network modeling), making it crucial for universities to harness technology to advance their research missions (Videka, Blackburn, & Moran, 2008). ICTs have the potential to help facilitate a more productive and effective learning environment for both social work students and professors.
---
Continued Growth of ICTs
Technology innovations are encouraging a trend towards the digitization of the world's information and knowledge, essentially creating stores of the accumulated human experience (Coyle, 2009). Computer technology has become integrated into the modern global society, serving a wide range of functions and purposes. With such growth have come extensive arguments that Internet access is a human right because it is necessary to fully participate in today's society. 1 The Federal Communications Commission (FCC) announced plans, in conjunction with the US Department of Agriculture and Rural Development, to create a national broadband internet policy to help ensure all United States citizens have equal access to high-speed internet (Federal Communications Commission, 2009). This policy, made possible through the Recovery and Reinvestment Act of 2009, is specifically tailored for citizens who live in rural or underserved areas (Federal Communications Commission, 2009).
As the use of ICTs continues to grow, it is important to understand convergence, and how convergence shapes the transmission of information and service delivery. This concept refers to "the coming together of information technologies (computer, consumer electronics, telecommunications) and gadgets (PC, TV, telephone), leading to a culmination of the digital revolution in which all types of information (voice, video, data) will travel on the same network" (Coyle, 2009, p. 550). The creation and utilization of smart phones (e.g., BlackBerry, iPhone) is a key example of convergence, where one device has multiple functions and different applications, bringing technologies such as social networking, email, video recording, and traditional cellular telephone service into one's pocket. Individuals of all age ranges are heavily involved in maintaining social connections through internet networks. For example, social networking websites, such as Facebook and MySpace, are used widely and boast highly active visitor populations. Facebook and MySpace each reached over 100 million active visitors by April of 2008 (Schonfield, 2008). The Internet and other telecommunication networks have an enormous impact on defining the future of human interaction, and to date, these changes have largely been positive across social contexts (Bargh, 2004). The field of social work needs to understand how these changes are influencing and will continue to influence all aspects of social work. It is critically important that such a research agenda builds an understanding of both the positive and negative impacts on human interaction.
---
ICTs AND SOCIAL WORK ETHICS
The growth of the Internet and use of ICTs have changed how we interact with each other and how we work (Bargh & McKenna, 2004). As the millennial generation (also known as Generation Y) is raised in an environment of highly complex, technology-mediated networks, the importance of ICTs will continue to grow (Weller, 2005). The field of social work faces a critical need to incorporate ICTs into the training of social workers, the delivery of social work services, and the conduct of social work research. It is clear that ICTs, when thoughtfully and effectively used, can improve the various practice methods of social work (i.e., delivery of services, education, and research). Although the potential uses of ICTs have been well defined, to date there has been little discussion of the impact of ICTs on the principles of social work ethics. Provided below are specific examples of how ICTs appear necessary for ensuring the delivery of ethical social work practice. We highlight relevant aspects of the NASW Code of Ethics (1999) and provide specific examples.
Ethical Principle: Social workers recognize the central importance of human relationships. ICTs play a major role in human relationships, which has implications for social work practice. More specifically, increasing numbers of people are engaged in relationships that are mediated by some form of ICT, including electronic messages (email), SMS text messages, social networking (e.g., Facebook), instant messaging services, or video chat (e.g., Skype). Social workers need to have an understanding of the roles that such ICTs may play in the lives of their clients. This may involve understanding how communication processes differ from face-to-face interactions, such as the use of emoticons (characters and symbols used to express non-verbal cues).
Social workers also need to understand that many relationships develop and may occur exclusively online. For example, the Internet allows groups to convene around a common purpose, including the provision of self-help, social support, and psychoeducation. Depending on their format, such groups may be referred to as electronic groups, listservs, forums, and mail groups. The proliferation of these groups can be attributed to anonymity and their ease of access, particularly for persons with mobility problems, rare disorders, and those without access to face-to-face groups or professional services (Perron & Powell, 2008). A number of studies have tracked the patterns of communication within online groups, and have found that many of the processes used are the same as those used in face-to-face self-help groups (Finn, 1999; Perron, 2002; Salem, Bogat, & Reid, 1997). Given the prevalence of online relationships, social workers and other human service professionals must be aware of the positive (e.g., social support; see Perron, 2002) and negative (e.g., cyber-bullying; see Hinduja & Patchin, 2008) effects they have on their individual clients, with a clear understanding of how relationships are mediated by ICTs. Currently, the social work curricula emphasize the importance and development of in-person relationships, while little attention is given to understanding the role of online and computer-mediated relationships.
Ethical standard 1.07: (c) Social workers should protect the confidentiality of clients' written and electronic records and other sensitive information. (l) Social workers should take reasonable steps to ensure that clients' records are stored in a secure location and that clients' records are not available to others who are not authorized to have access. Increasing amounts of information are being saved and shared electronically (Rindfleisch, 1997). While training social workers in all aspects of information security would be impractical, it is necessary that they have the requisite knowledge to raise fundamental questions about electronic security and to know when and where to seek additional information. This is particularly true in agencies that lack funding and resources to support information technology specialists. Without this basic knowledge, social workers can compromise the confidentiality of their client records or other important organizational resources, resulting in significant legal consequences and ethical violations.
Ethical standard 1.15: Social workers should make reasonable efforts to ensure continuity of services in the event that services are interrupted by factors such as unavailability, relocation, illness, disability, or death. Natural disasters and personal factors can easily disrupt the continuity of social work services, and clients living in highly rural areas often experience a lack of services. ICTs provide options to help maintain or re-establish services during times of personal or community crises, as described in numerous disaster management reports (e.g., Government of India, National Disaster Management Division, nd; United Nations, 2006; Wattegama, 2007). For example, if a service can be delivered electronically (e.g., psychotherapy), the only service barriers are ensuring that the client and service provider have computers or a mobile device with an Internet connection. Furthermore, the utility of virtual services such as remote psychotherapy (or more generally, "tele-mental health") is not limited to times of disaster. In fact, tele-mental health is used nationally for routine care in the Veterans Health Administration, in order to provide services to veterans in underserved areas (Department of Veterans Affairs, 2008). To further illustrate the opportunity to deliver clinical services over ICTs, recent surveys estimate that about 60% of Americans used the internet to access health information in 2008 (Fox, 2009), and about half of all healthcare consumers endorsed that they would be likely to seek healthcare through online consultations if these services were made available (PriceWaterHouseCoopers Health Research Institute, 2009).
Ethical standard 2.05: Social workers should seek the advice and counsel of colleagues whenever such consultation is in the best interests of clients. ICTs offer greater flexibility and support for seeking professional consultations, and numerous states permit online supervision. The sheer size of the online world suggests that no matter how specialized one's area of focus, like-minded colleagues can be located, and communities of practice may be established. For example, hoarding behavior is a fairly rare event in mental health services, particularly in comparison to other expressions of psychopathology (Steketee & Frost, 2003). Thus, issues in treating this problem and working with family members are rarely covered in the classroom. In the absence of ICTs, few training or consultation opportunities exist, but a simple search on hoarding as a mental disorder can reveal a wide range of potentially useful resources, including, but not limited to: contact information for experts and directories on hoarding behavior; video lectures on treatment; an extensive collection of YouTube videos providing information and personal accounts; and online support groups. Similar searches of other highly specialized areas such as disaster planning in social work, forensic interviewing of abused children, and inhalant abuse have also revealed a wide range of resources that are unlikely to be available to social workers in their local area.
---
Ethical standard 3.07(a): Social work administrators should advocate within and outside their agencies for adequate resources to meet clients' needs. Creative uses of the Internet are emerging to support advocacy. For example, the online service GiveAnon (http://givinganon.org/) uses the powers of ICT to allow donors to connect with recipients, contributing financially, directly, and anonymously. ICTs' ability to mask the identity of an online person or entity is creatively used in this case to help donors provide assistance without revealing their own identity. Thus, ICTs can serve as powerful organizing and advocacy tools. Social workers are positioned to use this tool, and many others like it, to address various needs and solve problems. Further integration of technology in the curriculum on organizing and advocacy with ICTs can have potentially significant payoffs. In a recent article in the leading health services journal Health Affairs, Hawn (2009) describes how Twitter, Facebook, and other social media are reshaping health care. At the time this manuscript was written, it was reported that Chicago's Department of Human Services began using a system that enabled human service providers, agency coalitions and the community to manage client and resource data in real time (Bowman Systems, 2008). Having real-time knowledge of available resources is critical for making effective and efficient referrals, particularly for crisis issues, such as psychiatric and substance use conditions, and housing.
Ensuring adequate resources to meet clients' needs must be considered within the overall budget of an organization. ICTs are a necessary part of most social work service agencies. Many agencies have large expenses related to their ICT needs, especially software upgrades. However, organizations can take advantage of the benefits of open source software to decrease costs related to information technology. Open source software "is a development method for software that harnesses the power of distributed peer review and transparency of process. The promise of open source is better quality, higher reliability, more flexibility, lower cost, and an end to predatory vendor lock-in." Open source licensing permits users to use, change, and improve the software, and to redistribute it in modified or unmodified forms (Open Source Initiative, nd; see also Lakhani & von Hippel, 2003). From a user's standpoint, this software is freely available and can be modified to meet a given need. Many agencies use Microsoft Office but cannot afford expensive software or hardware upgrades that are required over time. As an alternative, the same agency could use an open source software package (freely available), such as OpenOffice (www.openoffice.org), which is compatible with the Microsoft Office suite.
Cloud computing alternatives are another option -that is, software services that are provided over the Internet. The premise of cloud computing is that full software packages (e.g. office suites, database applications) are provided over the internet, eliminating the need for expensive equipment to be purchased and maintained locally (e.g., intranet servers; Hayes, 2008). Google, for example, provides an entire set of office-related applications called Google Docs (http://docs.google.com) that can do word processing, spreadsheets, and presentations. These applications never need to be installed on a local computer or upgraded by the user. They are compatible with other proprietary software, most notably Microsoft Office. Unusually for a major cloud computing service, it is freely available to anybody with a Gmail email account (also free), and the programs and files can be accessed from any computer with an Internet connection. Social workers should have knowledge of such resources and understand how they may be a reasonable alternative to address existing agency needs, in addition to understanding the legal issues of remote data storage and security.
Ethical standard 3.08. Social work administrators and supervisors should take reasonable steps to provide or arrange for continuing education and staff development for all staff for whom they are responsible. Continuing education and staff development should address current knowledge and emerging developments related to social work practice and ethics. A growing body of research shows that distance education can be as effective as or more effective than face-to-face education (Bernard et al., 2004). Moreover, the educational literature is pointing to the changing characteristics of our students. For example, students of the Net Generation and Millennial Generation, who are the largest age group of consumers of social work education today, have different learning expectations and learning styles that will require social work faculty to change how they teach (see Diaz et al., 2009). Distance education is also increasingly relying on and innovating with ICTs to facilitate student-to-teacher and student-to-student interactions and collaborations. The field of social work could enhance its overall educational infrastructure through the effective use of ICTs. This would allow access to opportunities that would not be available or affordable using traditional face-to-face formats. The use of ICTs undoubtedly gives greater access to higher quality educational opportunities (Asian Development Bank, 2004; Bonk, 2009).
Ethical standard 4.01. Social workers should strive to become and remain proficient in professional practice and the performance of professional functions. Social workers should critically examine and keep current with emerging knowledge relevant to social work. Social workers should routinely review the professional literature and participate in continuing education relevant to social work practice and social work ethics. Social workers have a daunting task of remaining current with the research in their area of practice. The reality is that the majority of research findings are disseminated and accessed electronically via the Internet. Many of the barriers that social workers face in accessing and even understanding the research may be overcome, in part, through the efficient and effective use of ICTs. For example, while many journals require expensive subscriptions, a growing body of journals are available online in an open access format. This is an important and complex philosophy; the immediate relevance is that open access gives social workers free and unlimited access to scientific articles (e.g., www.biomedcentral.com) which have traditionally been available on a subscription basis (see Suber, 2003). Social workers have access to a wide range of electronic video and audio recordings, also known as videocasts and podcasts, that discuss recent research developments. For example, social workers interested in psychiatric issues can easily find collections of grand rounds lectures archived by departments of psychiatry at medical schools throughout the United States. Many journals and other science-related newsrooms offer scientific findings in the form of emailed newsletters and electronic news feeds. Social workers can identify and subscribe to specific news feeds using really simple syndication (RSS) readers that link to news articles in their area of practice. These resources, and many others, are freely available. However, social workers must have competencies with ICTs in order to identify and use quality resources.
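As one concrete example of the news-feed workflow described above, the sketch below uses the third-party feedparser library (an assumed tool, not one named in the text) to list recent article titles from a journal's RSS feed; the feed URL is a placeholder to be replaced with a real feed.

```python
import feedparser  # third-party: pip install feedparser

# Placeholder URL: substitute the RSS/Atom feed of a journal or news service of interest
FEED_URL = "https://example.org/journal/rss.xml"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:5]:
    # Each entry typically exposes a title and a link for the article
    print(entry.get("title", "(no title)"), "-", entry.get("link", ""))
```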
---
FUTURE DIRECTIONS
Developing ICT Competencies and Literacy
Given the growth and impact of ICTs in society and their implications for social work ethics, it is critical that social workers have both competency and literacy with ICTs. While competency refers to being able to use a given technology, literacy refers to the ability to access, manage, integrate, evaluate, and create information (Chinien & Boutin, 2003). It is beyond the scope of this paper to provide a coherent and comprehensive strategy for developing social worker competencies and literacies with ICTs. However, the literature on ICTs and educational innovations in higher education provides extensive resources that are generalizable to the field of social work. Social work educators will need to be proficient with ICTs in order to design assignments, activities, and projects that reflect the real-world use of ICTs. Beyond higher education, continuing education opportunities that respond to recent technology advances are also necessary in order to help social workers stay current with the most relevant and useful technologies. For example, by having basic competencies and literacies, social workers and social work students who want further introduction to ICTs can review the complete curriculum materials for a course entitled ICTs in Everyday Life through the Open University (http://www.open.ac.uk/), in addition to having access to materials for other courses. This is part of the open education movement that views education as a public good, and Internet technology provides the opportunity to share, use, and reuse knowledge (Creative Commons, nd). In the absence of ICT competency and literacy, social workers will miss important educational opportunities for themselves and their clients.
---
Challenges and Pitfalls of ICTs
Despite the continued growth and expansion of technologies, many disenfranchised and disadvantaged persons still do not have access to ICTs or the Internet. While initiatives in the United States and other countries around the world are attempting to provide access to everybody, significant disparities within and across countries exist, particularly in African regions that have low Internet market penetration (Alden, 2004). By developing a stronger focus and infrastructure around ICTs in social work education, social workers will be better prepared to participate in a range of policy initiatives to support activities that seek to address these disparities in social, economic and political participation.
In the training of social workers in ICTs, it is also important to recognize that not all technologies have resulted in added value to education. For example, Kirkup and Kirkwood (2005) argue that ICTs have failed to produce the radical changes in learning and teaching that many anticipated. This underscores the importance of ensuring ICT literacy among social workers -that is, having the ability to access and evaluate information using ICTs (Chinien & Boutin, 2003). This will help social workers select the optimal tools from a wide range of options.
In the provision of clinical services, social workers must be aware that clinical needs can be (and currently are being) met through technologies such as telehealth and e-mail consultations (McCarty & Clancy, 2002). Recent surveys also suggest that clients welcome these new treatment options (Fox, 2009). Further research is still needed to better understand the effectiveness of Internet-mediated services. For example, the effectiveness of online psychotherapy shows promise but the existing research to date remains inconclusive (Bee et al., 2008;Mohr, Vella, Hart, Heckman, & Simon, 2008). The social worker using such technologies must consider how legal, ethical, and social principles apply, in addition to the advantages and disadvantages of online health services (see Car & Sheikh, 2004). Currently, the social work curriculum focuses almost exclusively on relationships in the absence of ICT mediated exchanges, but the growth of technology within the health care system makes these matters a priority in social work education. If such issues aren't addressed, the field of social work is at risk of not remaining competitive in the provision of health and psychosocial services. Moreover, without proper training, social workers in this arena of practice are at risk of delivering poor quality services or facing legal or ethical issues.
Social work researchers and practitioners should work in earnest to document both the successful and unsuccessful initiatives involving ICTs in the field. Case examples can provide the basis for understanding how ICTs can be integrated to enhance various aspects of the process. Unfortunately, the current method of disseminating new information and practice is primarily through professional journals, where the general timeline of an article (the time it takes to have a manuscript submitted, reviewed, and subsequently published) will likely not be quick enough to keep up with the advances in technology. It behooves the field of social work to explore options to connect with other researchers and practitioners to share knowledge, particularly with social media.
---
CONCLUSION
The field of social work education, research, and practice is surrounded by rapid developments in ICTs. In order to ensure that social work practice upholds the standards and values of social work ethics, it is necessary that social workers are competent and literate in ICTs. This will position social workers at all levels of practice to help advance the lives of disenfranchised and disadvantaged persons through greater access to education, knowledge and other resources. While numerous ICTs have failed to realize their expected potential, the ongoing rapid growth of ICTs has created a context in which social workers cannot resist technology, but must understand the role it plays in everyday life.
---
Author's note:
Address correspondence to: Brian E. Perron, Ph.D., School of Social Work, University of Michigan, 1080 S. University Avenue, Ann Arbor, MI 48109. Email: [email protected]. | 33,826 | 1,202 |
8a2955224192cb6b7a6ff4aeceb65c632729ab8c | The green economy as plantation ecology: When dehumanization and ecological simplification go 'green | 2,023 | [
"JournalArticle"
] | The green economy is proposed as a solution to address growing and potentially irreversible ecological crises. But what happens when environmental solutions are premised on the same logics of brutal simplification and dehumanization that sustain and reinforce systems of oppression and ecological breakdown? In this article, we describe the transformation of the biophysical landscape of the planet into replicable blueprints of the plantation plot. The plantation as a colonial-era organizational template is an ongoing ecological process premised on disciplining bodies and landscapes into efficient, predictable, calculable, and controllable plots to optimize commodity production and is dependent on racialized and gendered processes of dehumanization. The visible cultural, physical, aesthetic, and political singularity of the plot, under the guise of objectivity and neutrality, permits a tangible depiction of the way ecological breakdown takes place. We interrogate the notion of "greening" as a strategy to combat the unintended impacts of colonial plantation ecology, arguing that such tactics further reinforce the template of plantation ecology rather than dismantle it. We first conceptualize the historical plantation and its biophysical, cognitive, and corporeal organizational principles. We then offer examples of "greening" as new, more inclusive (but equally detrimental) forms of plantation logics, and crucially identify how these extensions of plantation logic get co-opted by resistance agents, from social movements to disease and pestilence. We consider sustainability certifications of palm oil through the Roundtable on Sustainable Palm Oil (RSPO) in Colombia and compensatory afforestation programs designed to compensate for forest destruction in India. We conclude by highlighting how abolition ecologies can serve as a counterweight to plantation logic, foregrounding the vital relations of self-reflexivity, reparation, and collective solidarity required to divest from plantation ecology. |
L'économie verte est proposée en tant que solution pour faire face aux crises écologiques incessantes et potentiellement irréversibles. Pourtant, les solutions environnementales dominantes reposent sur les mêmes logiques de simplification brutale et de déshumanisation qui soutiennent et renforcent les systèmes d'oppressions sociaux actuels et l'effondrement écologique en cours. Nous décrivons la transformation du paysage biophysique de la planète sous forme des monocultures de plantation, un modèle transposable sans égards pour les réalités locales. La plantation, en tant que cadre d'aménagement territorial de l'ère colonial, est un processus écologique en elle-même, fondée sur la discipline des corps et des paysages en parcelles efficaces, prévisibles, calculables et contrôlables pour favoriser la production de marchandises par le biais d'une optique de déshumanisation racisée et genrée. La singularité culturelle, physique, esthétique et politique visible de la parcelle de plantation, laquelle se veut objective et neutre, offre une représentation tangible de la manière dont la dégradation écologique se produit. Nous nous interrogeons sur la notion de « verdissement » en tant que stratégie de lutte contre les impacts imprévus de l'écologie des plantations coloniales en soulignant que de telles tactiques renforcent la logique de la plantation plutôt qu'elles ne le démantèlent. Nous commençons par conceptualiser la plantation historique et ses principes d'organisation biophysiques, cognitifs et corporels. Nous proposons ensuite des exemples de « verdissement » en tant que nouvelles formes plus inclusives (mais tout aussi nuisibles) de logiques de plantation, et identifions comment ces extensions de la logique de la plantation peuvent être détournées par des agents de résistance, qu'il s'agisse de mouvements sociaux ou de maladies et d'épidémies. Nous nous basons sur les certifications de la production de l'huile de palme dans le cadre de la Table ronde sur l'huile de palme durable en Colombie, ainsi que sur des programmes de reboisement compensatoire conçus pour compenser la destruction des forêts en Inde. Nous terminons en soulignant comment les écologies d'abolition peuvent servir de contrepoids à la logique des plantations en mettant en lumière les relations primordiales d'autoréflexivité, de réparation et de solidarité collective requises pour se désinvestir de l'écologie des plantations. Mots-clés: écologie politique, capitalisme, économie verte, racisme, Capitalocène
---
Resumen
The green economy is presented as a solution for confronting growing and potentially irreversible ecological collapse. But what happens when environmental solutions are grounded in the same logics of brutal simplification and dehumanization that maintain and reinforce systems of social oppression and ecological degradation? In this article, we describe the transformation of the planet's biophysical landscape into replicable patterns of the plantation plot. The plantation, as an organizational model rooted in the colonial era, represents an ongoing ecological process founded on reconfiguring bodies and landscapes into efficient, predictable, calculable, and controllable plots in order to optimize commodity production, based on racialized and gendered dehumanization. The evident cultural, physical, aesthetic, and political singularity of the plot, despite its apparent objectivity and neutrality, offers a tangible representation of how ecological degradation manifests itself. In this article, we question the notion of "greening" as a strategy to counteract the unwanted effects of plantation ecology, arguing that such tactics reinforce the plantation model rather than dismantle it. First, we conceptualize the historical plantation and its biophysical, cognitive, and corporeal organizing principles. We then present examples of "greening" as new, apparently more inclusive but equally harmful forms of plantation logic. Finally, we identify how these aspects of plantation logic are appropriated by actors of resistance, from social movements to diseases and epidemics. As illustrations, we consider sustainability certifications of palm oil through the Roundtable on Sustainable Palm Oil (RSPO) in Colombia and compensatory afforestation programs designed to counteract forest destruction through the expansion of monoculture plantations in India. We conclude by highlighting how abolition ecologies can serve as an antidote to plantation logic, and we underscore the necessary relations of self-reflection, reparation, and collective solidarity required to disrupt the logic of plantation ecology. Keywords: political ecology, capitalism, green economy, racism, Capitalocene
---
Introduction
In response to climate crisis and ecological breakdown, green transitions are being increasingly demanded by multilateral environmental organizations, scientists, policymakers, global lending agencies, and corporations alike. Proposals such as 'green growth' and a 'green economy' build on a popularized sustainable development discourse by claiming that growth can and must continue but be 'smarter' at internalizing unintended environmental side-effects -or externalities -into the economy. Renewable energy, certified niche products, and financialized Environmental, Social and Governance (ESG) portfolios are examples of how green products are leveraged to generate and capture new value and profit. Yet, the production of goods and services ("green" or otherwise) has its own ecological consequences. The desire to grow greener has meant the active manipulation of landscapes and labor relations to generate measurable (and lucrative) productive commodities in the name of sustainability (Neimark et al., 2021;Voskoboynik & Andreucci, 2022;Bigger & Webber, 2021). The "greening" agenda has not considered its own ecological effects beyond marginal efficiency improvements; this is because the underlying logic that drives intensive production systems erases and normalizes the global historical and colonial foundations of ecological breakdown (Sultana, 2022). This has been eloquently articulated by political ecologists and critical geographers in past decades (e.g. Sullivan, 2018;Andreucci et al., 2017;Pulido, 2017;Dempsey & Suarez, 2016;Büscher et al., 2014;Fairhead et al., 2012;Bakker, 2010;Smith, 2010).
In this article, we analyze an organizational template that has shaped and continues to shape landscapes and labor relations over the past five centuries: the plantation. Plantations -historical and contemporary -are situated in particular geographies and linked to expansive supply chains and markets. The uniform monoculture of plantation ecology attempts to scrub away any historical register of place by treating land as terra nullius, devoid of cultural significance, use or value other than for the extraction of specific commodities (Lindqvist, 2014). Consequently, people and non-human nature are violently detached from their communities and relations, extending monoculture beyond a production model. Here monoculture also refers to the imposition of singular ways of understanding the world and patterns of thought; universal, linear, and fixed conceptions of time and space (e.g. Shiva, 1993;Castree, 2009;Escobar, 2018); the imposition of a 'settler' distancing from nature ('something for the taking') (Burow et al. 2018); and structured and hierarchized categories of classifying people along racial, caste, ethnic, and gendered lines to optimize the instrumentalization of their labor (Ferdinand, 2019). However, this is not a complete or smooth process, and is rife with struggles for autonomy and subversion from people and polycultures alike (Tsing, 2015).
We explore these contested landscapes through the lens of plantation ecology. Plantation ecology stands in stark opposition to ecologies that generate the conditions for abundant life to thrive, or a world where many worlds can co-exist (Escobar, 2018). Plantation ecology refers to the historically and geographically situated plot, defining how and where capital production intervenes in the web of life, while attempting to enroll emergent life into new plantations. Rather than amorphously reinventing the wheel as a 'Plantationocene', plantation ecology is better understood as the set of dehumanizing ecological relationships that define capital accumulation in the web of life, or 'Capitalocene' (e.g. Moore, 2015). Since plantations signify a spatialized geography or physical plot of capital production, the term 'Capitalocene' more appropriately characterizes the underlying processes qualitatively (and irreversibly) transforming the web of life. These include the degradation of dehumanized bodies as cheap racialized labor, the violent homogenization of whole landscapes, and "just in time" production of new commodities to power global markets (Wolford, 2021; Davis et al., 2019; Sapp-Moore et al., 2019; Moore, 2015; Haraway, 2015; Haraway et al., 2016; McKittrick, 2013). Plantations are like templates shaping how commodity production is physically mapped onto the landscape, seascape, and even (increasingly) the spacescape. While maintaining the homogeneity of monoculture, they widen and deepen the commodity frontier, the process of accumulating value into and through new goods and services, and tap into emergent values and superficial representations of virtue and aesthetic judgements of beauty and taste.
Plantation ecology has its roots in the colonial enslavement of African people as dehumanized, laboring bodies to produce raw goods in colonized landscapes for manufacturing hubs in urban centers in North America and Europe (McKittrick, 2013). Colonial expansion gave wealthy capitalists in Europe license to cast colonial subjects as darker-skinned sub-humans who were indolent, ignorant, dangerous, immoral, and hence equivalent in stature to manipulable objects of nature (Koshy et al., 2022). European elites also leveraged racialized exploitation in overseas colonies to sustain class-based exploitation of working-class laborers within Europe. The abject dehumanization of millions of people through chattel slavery ensured a reliable and gratis labor force to funnel trillions of dollars in accumulated value from supply chains to the European capitals and their colonial outposts (Nally, 2011; Craemer et al., 2020). The profits and power relations generated by this system continue to shape the world today. Between 1990 and 2015, wealthy nations appropriated 12 billion tons of raw materials, 822 million hectares of land, 3.4 billion barrels of oil, and 188 million person-year equivalents of labor from former colonies and other nations distinguished along the racial color line (Hickel et al., 2022). Devaluation, or making inputs to production of less worth, is a functional property defining the ecological simplification and decimation of non-commodifiable life on the plantation. While quantification of resource and labor appropriation is beyond the scope of this article, we illustrate how "greening" solutions continue to embed devaluation, or the 'cheapening' of nature, life, and labor, as an organizing principle, further sustaining and reinforcing plantation ecology as the outcome of organizing land and labor for the elite capture of value (Moore, 2015).
In the next section we further expound on plantation ecology as an organizing principle causing ecological breakdown, irrespective of whether commodity production is "green" or otherwise. Borrowing from Ritzer (2018), we frame our reflections around four plantation design principles of efficiency, predictability, calculability, and control of both resources and labor for optimal commodity production. By optimizing the production of commodified goods and services (eco-friendly, socially disruptive, or otherwise), we argue that these principles characterize an ecology in their own right and attempt to further cement monocultural social and natural environments. Building from the four design principles of the plantation, we then illustrate in Section 3 how "greening" aids the expansion of plantation ecology by geographically widening and deepening the commodity frontier. In Section 4 we draw upon examples from afforestation in India and sustainability certification in Colombia and Indonesia. These examples illustrate how resistance emerges amidst the continual failures of monocultural uniformity repackaged as "green." We conclude in Section 5 by inviting space for relationships of self-reflection, repair, and solidarity needed to build the abolition ecologies that obstruct the will towards singularity and sameness and the violent oppressions these entail.
We invite researchers, activists, and civil society, inside and outside academia,2 interested in interrogating "green" solutions to consider how plantation ecology is a common denominator exacerbating climate and ecological breakdown. Resisting and dismantling plantation ecology can form a conceptual basis for building place-based solidarity against systems of oppression and for regenerating abolition ecologies. Abolition translates as restorative justice and the freedom to live away from environmental harm, racial discrimination, unjust gendered forms of labor, and class subjugation and constant threats of incarceration (Heynen and Ybarra, 2021; Gilmore, 2007; Pellow, 2019). Recognizing what Ferdinand (2019) claims is the tendency of environmental thought and anti-colonial thought to speak past each other, we hope these dialogues nurture transformative and collaborative thought and action.
---
Plantation ecology
The logic of the plantation operates as an organizational template historically shaping societal and ecological relations through the discipline of commodity production and pervasive dehumanization. Plantations are grounded in specific territories but are multiscale and linked globally across supply chains as well as the exploitation of racialized and gendered bodies as dehumanized labor, whose living relational connections to territory and knowledge systems are repackaged and made deadened as resources for commodity production (Yusoff, 2018). The plantation should be understood through the manifestation of a temporally and spatially specified plot, making and shaping monocultural environments with the express purpose of capital accumulation (Yusoff, 2018;Wynter, 1971). In turn, capitalism does not merely generate ecological consequences or "externalities," but is itself an ecological process generating and profiting off its own internal contradictions. This latter point is what has been termed capitalist world ecology -the dual interaction of human activity and environmental change as the production of capital in the web of life (Moore, 2015).
The outcome of this globalized world ecology has resulted in the vast terraforming of the earth's surface. These extend to monocultures of industrial agriculture and timber, processing and manufacturing factories, and mine sites. Less often perceived as extractive but operating under the same principles of enclosure include tourist resorts and nature parks, gated communities, and whirring and energy-intensive "cloud" servers. Seascapes are also integrated through transnational shipping networks, timed to brutal same-day delivery schedules (Ajl, 2021). Commodified outputs from the plantation emerge across a series of productive processes including the direct production of goods and services across multiscalar supply chains, their financial derivatives (e.g. futures trading, crop insurance markets, green bonds, climate-smart adaptation funds, speculative climate finance), the disposal of wastes (e.g. the e-waste, circular economy, and recycling industries), the securitization and militarization of plantation borders (e.g. industrial prisons and detention centres), and secondary appropriation of surplus value through activities such as rent seeking (e.g. carbon offsets, recreational tourism, eco-gentrification in urban areas).
As Wolford (2021: 7) states: "class, gender, and racial divisions were not invented for the plantation but in many ways, they were perfected there – strict hierarchies were laid down, justified and often internalized." It is important to emphasize that the "plantation" is not a synonym for "capitalism" but has been a kind of laboratory to position class relations, including racialized and gendered forms of dehumanization, as central to ordering people and nature alike for optimal commodity production. The establishment of racialized hierarchies of labor is but one (extremely brutal) process of class differentiation in optimizing the production and accumulation of capital (Koshy et al., 2022). Gendered divisions of labor underpin class differentiation through the exploitation of social reproduction and form the kernel of modernity's patriarchal origin (Mies, 2014; von Werlhof, 2013). Racialized and gendered subjugation to less-than-human status renders workers as manipulable as presumed non-human natures, perceived as passive resources (McKittrick, 2015; Yusoff, 2018). Realistically assessing the extent to which green strategies abolish class and patriarchal relations therefore means examining how these initiatives ecologize anew in particular places and settings, or conversely further pattern plantation ecology in more deceptively inclusive ways.
In what follows, we first conceptualize four organizational principles of plantation ecology that characterise its precision and replicability. Then we analyse how these principles widen and deepen commodity frontiers, and the role that "greening" plays in expanding plantation ecology. We demonstrate that "greening" not only fails to disrupt global and multiscalar links of plantation discipline, but actively aims to reinforce and expand them in the name of minimizing risks to disruption (i.e. sustainability as sustaining the status quo).
---
The organization of plantation ecology
Sociologist George Ritzer (2018) noted how every aspect of society was rapidly following a blueprint resembling the experience of being served in a fast-food McDonalds restaurant. He identified four intertwined organizational principles, which Desmond (2019) traces through to the cotton fields of slave-owning plantations of the U.S. South and contemporary capitalist work culture. These four dimensions, which we now turn to, are efficiency, calculability, predictability, and control (Ritzer, 2018).
Efficiency refers to obtaining the maximum amount of product or objective in the shortest time or at the lowest cost possible. In theory, maximizing benefit and minimizing cost is a desirable objective, especially given the rapid social and ecological hemorrhaging now occurring. When efficiency is applied in the context of plantation ecology, it refers to maximizing commodity production by reducing or further depriving the natures that make up the plantation's workforce (human and non) or by maintaining output through reductions in labor and resource costs (Shove, 2018). There is never a genuine attempt to become more efficient at a systemic level when unlimited growth and capital expansion is the aim, but only attempts to make material and energy extraction for commodity production quicker and more optimized. In this way, efficiency gains are immediately translated into new investable resources to expand production. This is a contradiction that English economist William Stanley Jevons had already identified in 1865 (Dale et al., 2017).
Efficiency in achieving desired objectives, with either minimal cost, maximal potential to extract profit or both, is a predominant feature of economic justifications associated with "internalizing" environmental externalities. For the green economy, efficiency is invoked through the argument that the world's life support systems can and must be protected if (and only if) their expected returns are higher than any alternative use. For instance, Waldron et al. (2020) highlight how "nature protection" as a green financial market could increase total global economic output by upwards of US$454 billion per year by 2050 and possibly up to US$ 1 trillion annually if remaining areas of the earth not currently under industrial production could be framed as a "single underexploited type of asset" (p. 11). The authors employ this efficiency-oriented argumentation to underpin the adopted Global Biodiversity Framework at the most recent Conference of the Parties to the Convention on Biological Diversity in Montreal (COP15 in 2022).
The second dimension is calculability. On the historical plantation, enslaved people's laboring potential was meticulously documented by plantation owners according to age, gender, and health status. In the shaping of plantation ecology, calculability is the capacity to quantify every aspect of the process of "product" delivery in terms of measurable indicators and targets, including increasingly creative ways to represent relational and subjective experience through quantified parcels of data. Desmond (2019) argues that the "cold calculation" in the control and precision of the laboring body has not altered since the days of exacting maximal labor per slave on historical plantations, but only that technology has become more sophisticated. These practices include surveillance of workers' emotional state to optimize productivity (e.g. Kaklauskas et al., 2011), upwards accountability and hierarchical reporting, achievement of ever-precise indicators and targets, and the overall precise quantification of output per unit of salary paid. Mbembe (2019: 14) refers to such measurement as a process in which all life itself becomes a "computational object" to be inserted into an algorithm to minimize costs and maximize labor potential.
In the green economy, calculability lies at the heart of the logic underpinning carbon emissions trading schemes, which invent "measurable 'equivalences' between emissions of different types in different places" irrespective of context (Lohmann, 2009: 81). To illustrate this absurdity, a carbon molecule emitted by a hospital treating desperate war-torn patients in Aleppo becomes both qualitatively and quantitatively equivalent to a carbon molecule from a billionaire's yacht cruising the South Pacific. Or, in India's compensatory afforestation programme, the loss of 166 sq km of tropical rainforest on Great Nicobar Island is planned to be compensated with an equivalent amount of monoculture tree plantation 2,400 km away (e.g. Narain, 2023). Climate loss and damage compensation, forest or wetland loss, or even discussions on historic loss and damage similarly tend to quantify otherwise incommensurable physical loss, cultural genocide and ecocide through monetary compensation or arbitrary equivalencies, excusing systemic change and leaving power relations unchanged.
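To make the commensuration logic concrete, a minimal sketch of the carbon-equivalence arithmetic on which such schemes rest can be written as

\[ E_{\mathrm{CO_2e}} \;=\; \sum_i m_i \cdot \mathrm{GWP}_i \]

where \(m_i\) is the mass of each gas emitted and \(\mathrm{GWP}_i\) its global warming potential over a chosen time horizon. The gases and values that follow are standard accounting conventions used purely for illustration, not figures drawn from the sources cited above: one tonne of methane (100-year GWP of roughly 28) plus two tonnes of CO2 (GWP of 1) collapse into about 30 tonnes of CO2e, a single number deemed interchangeable with 30 tonnes of CO2e emitted, or "offset", anywhere else, irrespective of place, history, or context.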
The third dimension, predictability, ensures that product delivery or public policy is homogenized for consistency and buy-in. Without collapsing the differences between the two contexts, the disciplining of laboring bodies that deviate from a standardized formula of expected future production, aligned to mechanical clock time, characterizes both the abject violence against laboring bodies on the historical plantation (e.g. Smith, 1997) and present-day industrial production discipline (e.g. Nanni, 2017). The insidious case of 133 enslaved Africans thrown overboard from the Zong slave ship in 1781 so that its owners could collect on insurance claims illustrates how important the predictable delivery of privately owned, financialized human bodies was for plantation ecology (Sharpe, 2016). Similar to how milk gets dumped, or pigs get slaughtered during bottleneck delays in the supply chain (such as during the COVID-19 pandemic), rough seas, mutinies, and weather delays threaten(ed) the predictable arrival of fully productive, dehumanized labourers needed to meet expected production of plantation crops.
Producing predictable outcomes out of increasingly unpredictable climates continues today in the green economy. Paprocki (2018), for instance, illustrates how "climate adaptation" projects have been strategically targeted to depopulate coastal areas of Bangladesh to both dispossess small-scale fishers of their territories and cultural sovereignty, sucking them into precarious wage-labor relations in peri-urban slums, while simultaneously awarding contracts for lucrative sea wall construction projects financed by foreign investors. Like the rough waters of the Atlantic during the slave trade, climate change adaptation has become an opportunity to turn unpredictable risk into new value streams and new spinoffs of the plantation. Predictability is also crucial to justify returns on eco-investment associated with strategies like climate finance and carbon futures markets. For instance, commodity finance analysts have assessed the predictability of returns in carbon and ESG-related investment portfolios vis-à-vis other capital markets (Cornell, 2021; Cappucci, 2018). Verifiable offsets that avoid double-counting and ensure additionality (e.g. carbon sequestration that would not have happened without the offset) are major conundrums for climate financiers. Speculative finance in climate-smart real-estate and infrastructure depends on predictable returns on investment, irrespective of context, culture, history, climate, or underlying socio-political tensions and dynamics (Scoones and Stirling, 2020).
The fourth dimension refers to control in maintaining the conditions of plantation (re)production and aligns with Mbembe's (2003) necropolitics to understand how hegemony on the historic plantation is sustained, as well as how rights to thrive are distributed to a few at the expense of the (slow) death by exploitation of countless others. Although control is presented here as a dimension parallel to the others in sustaining plantation precision and replicability, it might also be conceived as a form of biopower deployed across physical (e.g. military-industrial and surveillance technology) and cognitive landscapes to internalize or normalize the other three dimensions. Control operates through adherence to established path dependencies, including along lines of racial purity, patriarchy, and prioritizing settler futures (Mitchell and Chaudhury, 2020;Duncan, 2019). These can range from coercive social norms, formal laws and regulations by the state, scientific expertise, and nation-building discourses defining what is considered appropriate courses of action within the cognitive, cultural, and physical boundaries of hegemonic plantation logics. As political ecologists have long argued, the apparatus of science is weaponized to both build further on and improve the technics of governance that ultimately maintain or strengthen control over society (Scoones et al., 2015;Robertson, 2012;Jasanoff, 2004;Mumford, 1964). In this sense, the dimension of control is a clear exercise of power that makes it appear as though the contradictions of the plantation can always be 'rendered technical' (Li, 2007) but necessarily managed within the confines of the plantation itself.
In these contexts of discursive, political and economic hegemony, ecology is easily weaponized towards ecofascist agendas (Moore and Roberts, 2022). "Invasives" on the plantation, for instance, reflect non-human natures like pathogens, pests, and parasites that threaten expected yields and ultimately commodity futures markets. They may also include perceived threats from Indigenous and other subaltern groups whose inclusion in the club of "Humanity" (capital H) stands in the way of more efficiently exploiting their labor. The convergence of xenophobic nationalism and neoliberal capitalism (e.g. Arsel et al., 2021) is a throwback to a supposed 'golden era' of the historical plantation where all resistance could be violently suppressed. "Greening" interventions cannot be viewed in isolation from the expulsion of migrants, construction of border walls, for-profit anti-Black carceral plantations continuously churning out cheap labor, control of women's reproductive rights, and everyday intimidation by police and paramilitary forces. These work in concert to reinforce plantation discipline, even in more eco-friendly and net-zero forms, and the ongoing colonial and patriarchal project that they underwrite (Arboleda, 2020; Federici and Linebaugh, 2018; Ferguson and McNally, 2015; Gilmore, 2007).
---
Expanding the commodity frontiers of the plantation through the green economy
While the four principles of plantation discipline described above help us to understand the culture and technique shaping plantation ecology, the frames of commodity widening, and commodity deepening illustrate how plantation discipline operates geographically and historically. The process of enrolling biophysical materials and laboring bodies into production takes place at the commodity frontier (Swyngedouw, 2006). 3 This frontier refers to an "underutilized" "outside", where relational values between people and non-people are violently subjugated to that of property and commodities for exchange value (Moore, 2010). The advancement of this frontier is manifested as resource imperialism, proceeding through militarized expansion across territorial space and dispossessing people of their sovereignty and relational entanglements to life (Harvey, 2005). The expansion of this to new places is what Moore (2015) refers to as "commodity widening." Commodity widening usurps land and its inhabitants and attempts to fold them into the efficient, calculable, predictable, and controllable social relations required for capital accumulation. Meanwhile, "commodity deepening" refers to hyper-intensified processes of producing commodities that are more refined, adaptive, and resilient to crises, without necessarily expanding production geographically. In the two sections below, we connect each of these frames with our discussion on plantation ecology and the green economy through the examples of climate debt and climate-smart agriculture respectively.
---
Commodity widening
The relation of commodity widening processes to historical plantations explains their geographic spread, particularly through the settler colonial occupations of the Americas, resulting in Indigenous genocide and an orchestrated global slave trade that set the wheels of white supremacy into motion. As Zuberi and Bonilla-Silva (2008) argue, once Africans were emancipated from slavery in the West in the 19th century, resource imperialism and colonial subjugation continued and accelerated across commodity frontiers in Africa and Asia during the 20th century and beyond. Banoub et al. (2020) identify how commodity widening takes place through a process of discovery, selection, and exclusion in the acquisition of vast new terrain for commodity production. The authors emphasize the spatial and temporal malleability of material natures as a function of their physical qualities as well as the labor to optimize the production of surplus value. Goods and services produced under the green economy, such as lithium batteries for electric vehicles or carbon offsets from tree plantations, follow in the practice of commodity widening, beginning with the enclosure or capture of lithium deposits or sequestered carbon stocks, transferring them from common or customary land relations to private property regimes. Consequently, commodity widening is prone to what has been termed "green grabbing" (Fairhead et al., 2012).
Commodity widening is tightly linked to low-interest bank loans and expanding relations of debt. Bank loans by colonial creditors and resulting debt bondage financed vast slave-owning plantations for commodity crops in the US South, the Caribbean, South Asia, North Africa, the Malay peninsula and elsewhere (Upadhyaya, 2004; Harvey, 2019). In turn, debt-fueled commodity production to pay back creditors has pushed the frontiers of commodity expansion into new territories, disrupting already existing human-nature relationships and generating further ecological degradation alongside new speculative opportunities for investment in green financing, like climate-smart agriculture, to address the continuous environmental contradictions of production. These new speculative opportunities, and the low-interest loans they encourage, further the debt-driven expansion of the agricultural commodity frontier, kicking the can of environmental problems further down the road; the cycle of debt repayment and commodity frontier expansion continues ad nauseam.
Since the 1990s, national debt relief through conservation agencies working with creditors in Europe and North America has been a popular approach for nature protection. These 'debt for nature' swaps involve a creditor country, or a conservation NGO working on its behalf, writing off sovereign debt in exchange for conservation projects, thus offering economic "wiggle room" for countries to invest in ecological transitions (Svartzman & Althouse, 2022). Countries must achieve conservation outcomes, like the expansion of protected areas, by specific deadlines in these swaps and therefore must raise sufficient conservation finance to do so, often in the form of government loans or bonds devoted to terrestrial (e.g. "green") or marine (e.g. "blue") conservation agendas. These have grown in the wake of the COVID-19 pandemic (Akhtar et al., 2020), with new deals being arranged with Belize, Zambia, Ecuador, and Barbados. Conservation-linked (often tourism-based) industries, adept at exploiting the value generated by conservation imagery, qualitatively transform the previous ensemble of situated ecological relations to put the terms and conditions of capital accumulation first.
The outcome of these swaps results in exchanging one type of debt for another, allowing holders of green or blue bonds to profit from lucrative nature conservation strategies -including through real-estate speculation from conservation-based tourism. While provisions can be made to foreclose social harm to marginalized populations, there is no requirement that this takes place. Similar strategies of debt-driven "greening" have come in the name of so-called "nature-based solutions" that disguise large-scale infrastructure projects under the banner of environmental consciousness (Chausson et al., 2023).
Commodity widening also takes shape from the value grabbing of untapped rent value from nature (e.g. Andreucci et al., 2017; Fairhead et al., 2012), fueling ecologically and socially damaging economic spillovers like real-estate speculation (e.g. Gillespie, 2020). Rent refers to the instituting of property rights not used exclusively for new commodity production, but to extract value from aesthetic qualities, including the nebulous notion of being "nature positive", prime locations, cultural characteristics, carbon sequestration potential, or other positive externalities (Andreucci et al., 2017). These may result in exchanging carbon credits or certifying products or landscapes as "eco-friendly." Commodity widening through value grabbing from rent caters to morals, ethics and even calls for justice. Capitalizing on rent value requires reserve armies of low-skilled and precarious workers to manage landscapes for nature-based solutions, palatably labelled as green jobs (e.g. Neimark et al., 2021). The efficiency, calculability, predictability, and control dimensions of plantation ecology are best suited to locations where labor costs are low and the consumptive values of treating nature as an asset class are most optimal.
---
Commodity deepening
Commodity deepening occurs when spatial extensification of new territories is no longer possible. The commodity frontier advances through intensification that ramps up and hastens production. This involves technological innovation to further capitalize on otherwise difficult to obtain cheap natures and labor potential, identify and exploit surplus value and to further centralize control (Arboleda, 2020). In the case of agriculture, this commodity deepening process takes place through mergers or agreements between retailers, fertilizer and pesticide suppliers, shipping and seed companies, big tech digital agriculture platforms, multilateral banks, and "sustainable" development finance (Banoub et al., 2020).
Some examples of commodity deepening include: artificial intelligence technology to identify and extract difficult-to-reach mineral ores and oil sands, genetic breeding of climate and pest-resilient crops, optimized exploitation of (now depopulated) commercial fish through aquaculture, shortening poultry production schedules through injections of ever-specialized hormones, or the use of drones and field sensors that provide data on soil conditions and fertilizer requirements and monitor pests, among many others (GRAIN, 2021). In terms of labor, commodity deepening has meant greater surveillance of individual productivity, stronger captivity of workers to dependency on high-interest credit lines and mounting debts, greater fragmentation of laboring classes through outsourcing across global supply chains and the disruption of meaningful union organizing of workers across these disparate chains. In short, plantation ecology is further deepened and reinforced through greater control over productivity to enhance the pace, direction, and consistency of surplus value generation (Banoub et al., 2020).
One example of commodity deepening of plantation ecology emblematic of the green economy is the deployment of climate-smart agriculture. Touted discursively and institutionally by governments, agribusiness, and multilateral development and aid agencies alike, climate-smart agriculture leverages the branding of climate solutionism to further intensify industrial crop production through bioengineered crops. It wields already existing practices like herbicide usage for pest resistance and rebrands them as "climate-smart", reducing the need to till soils and release stored carbon (GRAIN, 2021). Yet enrolling these rebranding techniques and engineered technologies into plantation production systems directly and indirectly exacerbates the ecological breakdown they are meant to address. For instance, applying formulated herbicides to target particular pathogens has, in some cases, permitted these very pathogens to evolve and mutate in ways that adapt to the genetic selection of whole crops or livestock engineered to thrive with continued applications of these herbicides or antibiotics (Wallace, 2020). As the recent COVID-19 pandemic painfully demonstrated, these risks (e.g. pathogen outbreaks and climate change) are ultimately offloaded onto workers of the plantation.
Consequently, production relations of the plantation not only do not change but are further securitized and entrenched.
Commodity deepening thus exerts a discursive, institutional, and material power to obscure existential risks that might alter the discipline of plantation ecology (Newell and Taylor, 2018). It rather redeploys concepts like regeneration and climate resilience in service of justifying new or existing commodities produced under already existing modes of plantation discipline, monocropping, financialized speculation and debt. Above all, commodity deepening does little to nothing to alter uneven patterns of value accumulation, in which value accrues to end users of supply chains rather than being returned to workers of the plantation (both human and non). While marginal material and energy efficiencies may result, the overall outcome is the expansion of yields and more efficient "just-in-time" delivery to retailers and consumers, especially when the same digital technologies are tied to algorithms for consumption preferences before consumers even know they desire something (GRAIN, 2021). Ultimately, such green branding for material and energy efficiencies is overwhelmed by faster economic throughput, or the rebound effect, making it even more difficult to transform the production relations of plantation ecology that cause social and ecological harm (Nasser et al., 2020). Commodity deepening is metaphorically the act of digging a deeper hole to pull oneself out of it.
Regardless of the extensive or intensive nature of surplus value generation (e.g. commodity widening or deepening), the subjugation of human and non-human bodies as devalued natures is crucial to the process by which plantation ecology becomes inscribed as the Capitalocene in the web of life (Moore, 2015). Figure 1 illustrates the characteristics of plantation ecology as thus far described. Both commodity widening and deepening involve financial speculation on expected future profits in light of uncertainties and risks. In doing so, both processes attempt to hold the future hostage by already foreclosing the agency of unborn non-human natures and other lifeworlds (Mitchell and Chaudhury, 2020; Whyte, 2017). The key here lies in the attempt. While this does not deny that plantations succeed, to varying degrees, in erasing and subduing lifeworlds as novel, disruptive, or innovative assets, it also reveals the systemic failures that are working to undo plantation ecology itself.
---
From "greening" ecology to subjecting "green" to ecology
For all their seeming pervasiveness, plantation ecologies are contradictions. By constantly generating social and ecological harm, they also generate the conditions to undo themselves. Yet the crises they produce also become new opportunities to continuously subject people and nature as cheapened and discardable workers, raw materials, or wastelands to make way for new "eco-friendly" and inclusive plantation products: everything from climate change crop insurance for those willing to pay the premiums to LGBTQ+ friendly and accredited real estate companies that contribute to urban gentrification and a growing housing crisis. The issue is not the intention towards inclusivity; it is rather the lack of attention to the political economy within which such inclusivity resides. The way that plantation ecology reduces diversity to monoculture, even as it depends on such diversity as the substrate to reproduce, sustain, and expand the deadening and dehumanizing logics of monoculture, is what Katherine McKittrick (2013: 5) calls an "oppression/resistance schema," giving the plantation an inbuilt capacity to maintain itself by feeding off its own contradictions. Yet recognizing novel branding strategies as replications of plantation ecology removes the "green" clothes from the metaphorical emperor and opens up possibilities for more fundamental ecological transformations.
One way to appreciate the relational character of plantations is to better understand how and by whom they are unmade. This requires understanding how situated sites of liberation and freedom are established, even if ephemeral (Gilmore, 2017). In their review of Johnhenry Gonzales' (2019) Maroon Nation, Heath (2022) describes Gonzales' account of how autonomous peasant economies of formerly enslaved workers on sugar plantations in Haiti transformed the production relations of plantation ecology. This was the result of political struggle to reassert specific definitions of freedom as tied to place, and of the formation of class consciousness and solidarity that emerged out of that struggle and culminated in the Haitian Revolution. Such consciousness continued to foster resistance against efforts of post-Independence elites to reassert plantation discipline, including in the discursive use of so-called "free" labor. Heath describes how the autonomy and self-sufficiency of maroon communities facilitated both escape from and re-capture into the plantation economy through the liminal reappropriation of the plantation itself, for a moment in time and space, reasserting West African cultural traditions within the territory.
Elsewhere, Glover and Stone (2018) describe how terraced landscapes of wet rice cultivation by the Ifugao in the Cordillera mountains of northern Luzon in the Philippines were the outcome of social, cultural, and spiritual resistance to colonial (Spanish) and imperial (American) attempts to reassert plantation ecology in the 19th and 20th centuries. A morphologically distinct landrace of rice (called tinawon) sustained the Ifugao and gave cultural meaning and purpose to their reclaiming of freedom from oppression. In these contexts, the notion of a plantation can no longer be totalized through uniformity, precision and replicability, dehumanization or value accumulation; such landscapes instead become sites of life generation premised on liberation from oppression and control. Tinawon rice is typically grown for only a single annual harvest, combining deep spiritual connection and cultural meaning for the Ifugao, defining their political structure and economic relations, with the unique climatic, altitudinal, and ecological conditions of the Cordillera mountains (Ibid.).
The close relation (or indeed complicity) between both human and non-human resistance to plantation ecology that these historic examples provide opens new avenues of reflection in the face of ecological breakdown and so called "green" solutions. As we have thus far described, "greening" strategies have tended to entrench plantation ecology through the generation of new forms of value capture, including through novel forms of resource and labor devaluation to produce "green" goods and services. But how do affected workers on the plantation (both human and non) engage in marooning practices by taking advantage of increasing social and ecological dislocations that continuously emerge from these so-called solutions? How might abolition from the ruins of the plantation be fostered by weaving new kinds of relationality, class consciousness, and solidarity to build political power (Stoetzer, 2018)? How might the "green" plantation be resisted by fostering alternative ecologies of liberation and abolition? We now turn to two examples of "greening" interventions that reproduce plantation ecology, yet also involve actions of resistance and defiance. These examples are summarized in Table 1. In these examples, we refer to our own empirical research (both published and unpublished), drawn from interviews conducted between 2017-2019 (for compensatory afforestation in India) and 2021-2022 (for the RSPO). We subsequently conclude with some lessons that point towards abolition ecologies.
---
Table 1: Features of two "greening" interventions that: a) embed or reinforce plantation ecology through their theory and implementation within the so-called "green" economy and b) generate contradictions that resist and redirect plantation ecology. [The table is organized by plantation ecology characteristic (efficiency, calculability, predictability, control) for compensatory afforestation in India and RSPO certification; recoverable entries under (b) include: oil palm smallholders, who form the backbone of oil palm growers, can hardly obtain sustainability certification due to prohibitive costs and limited knowledge of certification benefits (Abazue et al., 2019); and increasing public awareness of the greenwashing of ecological and social impacts from sustainability-certified palm-oil-based biofuels, casting suspicion over public manipulation (Kukreti, 2022).]
---
Green certification schemes: The Roundtable on 'Sustainable' Palm Oil (RSPO) and its undoing

The rapid expansion of palm oil monocultures by transnational and local firms in Southeast Asia, Central and West Africa, and more recently Latin America has caused the erasure of social-ecological histories along with mass-scale incorporation or displacement of local communities in forest biomes that are among the richest in biodiversity (Pye, 2019; McCarthy and Cramb, 2009). Dehumanized, laboring bodies brought into the logics of the oil palm plantation have been widely devalued, differentiated according to gender, nationality, ethnicity, and class status, and subjected to sustained forms of exploitation (Bissonnette, 2013; Li, 2011). In Colombia, in a context of civil war, oil palm plantations have provided the justification and financial means for military and paramilitary forces to enclose and secure large tracts of land, dispossessing thousands from their territorial and cultural autonomy to further the accumulation of lands for commodity production (Hurtado et al., 2017; Maher, 2015; Palacios, 2012; Potter, 2020). In response to growing scrutiny of the more visible aspects of ecological destruction across palm plantation regions (e.g. orangutan deaths, forest and peat soil fires and haze, massive contribution to climate disruption) as well as labor practices, "sustainable" palm oil through certification has become a salient public relations 'fix' for the industry (Pye, 2019).
The Roundtable on Sustainable Palm Oil (RSPO) is an initiative launched in 2004 by the WWF, the Malaysian Palm Oil Association (MPOA), Unilever, Migros (a retailing and refining chain), and AAK (a vegetable oil producer) with the goal of promoting the use and production of harm-free palm oil. The RSPO provides a platform for oil palm companies to engage in a supposedly third-party certification process that measures compliance with rules and standards approved by the consensus of its members, such as zero burning, herbicide use reduction and respect of labor regulations (Bain & Hatanaka, 2010). Its definition of sustainability relies on applying the right techniques and best practices such as selecting the best seeds and planting materials, technical fertilization based on soil surveys and nutritional assessments, adequate use of agrochemicals, and attention to drainage and water systems to increase the productivity, efficiency and profitability of RSPO members.5 Without changing the production system but using "green" as a license to both widen and deepen commodity frontiers, the RSPO offers a novel survival strategy for the dehumanizing logic of the plantation. Using the sustainability narrative, the RSPO has become the most prominent initiative to secure market shares for oil palm and assert large companies' social and environmental corporate responsibility and ESG portfolios. It effectively contributes to creating a "green" rent value within the broader political economy of intensive oil palm production and secures access to markets in places (like Western Europe) where consumers have higher purchasing power and environmental awareness, what has been termed 'ethical consumerism' (Pye, 2019: 220). The RSPO further legitimizes the idea that plantation agriculture can be regulated voluntarily by companies, if consumers are willing to pay more for an eco-certified product produced by the same plantation discipline that in fact never gets called into question. Opposition to plantation logics is thus effectively diffused through novel and flexible strategies that co-opt socio-environmental concerns and ultimately serve to extend the plantation.
The RSPO's principles and criteria of sustainability do not address the structural problems of the industry, including land conflicts and dispossessions, labor exploitation, human rights violations, and environmental degradation caused by the continuous expansion of the industry (Pichler, 2013). RSPO certifications and membership can also be used by palm oil companies to legitimize and consolidate illegal land dispossessions and accumulation, and to greenwash histories of violence, discrimination and conflict, as illustrated by specific palm oil companies in Colombia associated with paramilitary violence, forced land dispossessions, and death threats against land claimants and Indigenous populations near plantations (Comisión Intereclesial de Justicia y Paz, 2015; EIA, 2015; Somo & Indepaz, 2015). The combined apparatus of government, private sector, organized crime, paramilitary groups, and scientific institutions at the helm of the green economy falsely equates savagely simplified plantation discipline with the kinds of ecological plurality it claims to be regenerating.
Oil palm production, however, also exists outside the logic of the plantation. Despite profound disruptions brought by Western colonialism to the complex ecological relations developed by communities throughout history, small family farmers have remained the backbone of oil palm production in most parts of the world (RSPO, 2020). Even in Malaysia and Indonesia, where oil palm was initially introduced in the late 19th century as a plantation crop grown in centrally managed large-scale systems, it was rapidly taken up by hundreds of thousands of small family farmers and grown in diverse ecological systems (Bissonnette & De Koninck, 2017). In Northeast Brazil under Portuguese colonialism, enslaved Africans brought with them oil palm seeds, which eventually enabled the emergence of a distinct Afro-Brazilian landscape. It produced the agroecological region now referred to as the Palm Oil Coast, Costa do Dendê, a clear marker of agency and of cultural and territorial reappropriation (Watkins, 2015). Despite the horrifying logic of the plantation, the crop itself and the human relations formed around it can never be fully reduced to a predefined outcome premised on a factory-model logic of production.
Where the "green" plantation logic manifested through RSPO shows limits is precisely in the certification of small family producers or smallholders. The diversity and complexity of tenure arrangements, cultivation practices and access to information (Jelsma et al., 2017) renders small scale production less visible to the uniformity of "greening" practices. This is not to say that small-scale oil palm farmers fall outside power relations of plantation logics, which they indeed may aspire to in the hopes of generating profit as property owners. However, because they are highly heterogeneous and remain embedded within more embodied relations to land and labor, they actively shape ecological processes that fundamentally differ from that of monoculture.
---
"Greening" development in India through compensatory afforestation
The reproduction of plantation logics within India's green economy is a growing concern. Stories of resistance from plantation landscapes in the state of Odisha, however, offer insight into why and how these logics fail or get undone. In India, compensatory afforestation (CA) requires public and private agencies that deforest for roadbuilding, mining, or other development projects to plant an 'equivalent' forest elsewhere. While ostensibly a tree-planting project, CA is at its core a tree-cutting project: behind each (largely monoculture) plantation that exists through this program stands a forest that is being cut.
In Keonjhar District, Odisha, monoculture tree plantations have been imposed on community lands for decades, often under the guise of "podu prevention" (Panda, 1999). Podu refers to a system of agroforestry that is often known as rotational agriculture, shifting cultivation, or swidden cultivation. Practitioners move from site to site, leaving fallows to regenerate and clearing a new patch for cultivation. Like many previous afforestation programs, CA site plans reveal that forest officers intentionally select podu sites for plantation, describing them as "podu-ravaged", "subjected to podu cultivation" or "conspicuously cultivated" (DFO, 2014). Aware that this will drive conflict and resistance from villagers, who cultivate or forage about half of their food basket in the forest (Valencia, 2019), site plans often include strategies to ensure "good humor" among villagers including celebrations that will inspire them to "protect the plantation" (DFO, 2014). Yet because communities are acutely aware that the spatial and ecological imposition of plantations on podu lands reflects a broader political project-threatening their livelihoods-they reject the counterintuitive assumptions of CA, including that Adivasi (i.e. Indigenous) customary rights, livelihoods, and cultures are obsolete; that monoculture plantations are equivalent to forests; and that plantation protection leads to "benefits."
The state's pursuit of "efficiency" is reflected in its strong preference for teak (Tectona grandis). Teak is a favored species for plantation forestry given its quick growth, durability and economic value. But for communities in these areas, teak plantations are "utterly useless" (Valencia, 2019) in comparison to forests and regenerating fallows which offer fuelwood, fruits, roots, tubers, leafy greens, seeds, fodder, and other forest products. Resistance to teak has an important historical legacy in central India. The Jangal Katai Andolan (Forest-Cutting Movement) in the 1970s organized Indigenous communities to burn plantations, destroy saplings, and demolish forest department infrastructure (Sen, 2018, p. 195). In Keonjhar, Indigenous organizing against plantations initially focused largely on species selection, with demands not to end plantations but to recognize communities' decision-making role in picking species that benefit them. Today, forest agencies claim to be undertaking a more "holistic" approach to plantations, including attention to polyculture. However, ground-level evaluations of CA plantations show that, where plantations are indeed undertaken at all (e.g. Kukreti, 2021), teak remains the mainstay.
The narrative that plantations deliver efficiency is belied by the fact that plantations rarely survive (Rana et al., 2022). In Keonjhar, the plantation legacy is mired in failures spanning from the era of social forestry (e.g. Panda, 1999) to their new linkages with the green economy (Valencia, 2019). Ground truthing of CA plantation data has revealed that CA saplings may be planted, and a plantation may exist in principle, but within a few years sites are often reverted to shifting cultivation and replanted with traditional crops (Valencia, 2021). In 2013, the Comptroller and Auditor General of India released a report including evidence of unacceptable plantation survival rates, unmet offset objectives, and rampant financial mismanagement (MoEFCC, 2013). Hardline conservationists and retired forest officers challenged the program on similar lines, leading to a new law, the Compensatory Afforestation Fund Act, 2016.6 Taken together, the veneer of efficiency crumbles.
The dimensions of calculability and predictability particularly enrich an analysis of how CA connects with the broader green economy. As per India's Forest Conservation Act, to achieve "equivalence" between forests and plantations, deforesters must fund plantations that average 1,000 trees per hectare and must pay into a fund that approximates the net present economic value of foregone ecosystem services (e.g. biodiversity protection, carbon, water recharge) associated with deforestation spread across a 50-year period, to account for lost regeneration costs (Kohli et al., 2011). However, advocacy to push state forest agencies away from highly dense "block plantations" and towards "assisted natural regeneration" has created a perverse outcome. For instance, site maps for upcoming plantations in Thuamul Rampur, Odisha reveal that rather than simply targeting shifting cultivation fallows for block plantations at 1,600 trees per hectare, every square centimeter of village commons will be enrolled in plantation projects at a lower density (Valencia, 2019). Given that neighboring villages are often affected, this plan will convert interconnected, Indigenous shifting cultivation landscapes into archipelagos of homesteads within seas of fenced-off "green" state property.
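To illustrate the calculability at work here, the net present value that a deforesting agency pays can be sketched, using assumptions of our own rather than the rates actually notified under Indian regulations, as

\[ \mathrm{NPV} \;=\; \sum_{t=0}^{50} \frac{V_t}{(1+r)^t} \]

where \(V_t\) is the estimated annual monetary value of the foregone ecosystem services and \(r\) the chosen discount rate. With a hypothetical \(V_t\) of ₹100,000 per hectare per year and \(r = 4\%\), the 'equivalent' payment works out to roughly ₹2.2 million per hectare: a single figure standing in for fifty years of forest relations, however incommensurable.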
Compensatory afforestation plantations are unique within India's massive restoration portfolio as they have the power to delay forest clearances for expensive extractive projects. Predictability of plantation site availability, suitability, and execution is therefore key. One piece of evidence for this is that in Odisha, the state land bank identifies lands for CA as lands for investment, thus increasing the risk of land dispossession. The political economy of land demands within which CA is embedded also creates a predictable procedural space. A site plan can simultaneously employ specific turns of phrase, including the assertion that desired lands for CA are "conspicuously cultivated" or "free of encroachment and encumbrance" (DFO, 2014). These phrases sanitize the lived experiences of people dispossessed by the plantation projects and conceal failures in due process with no consequence. Predictability is a core component of compensatory afforestation and restoration logics more broadly because of the calculated tree-planting commitments India has made, which plantations must fulfill. India's global commitment to afforestation, at 26 million hectares, is second only to China's. India has also long committed to increasing forest cover from the present 21% to 33% (for more on these rather arbitrary numbers and definitions, see Davis & Robbins, 2019).
The resulting power struggle between communities and the State invokes the fourth dimension of control. Reflecting on the planned proliferation of plantations across podu patches in her village, one senior woman asserted: "We will not allow them. If they do plantation everywhere, what land will be left? How will we survive?" (Valencia, 2019). While plantation policies and plans occur at the higher levels of the forest bureaucracy, exertion of control is often up to the lower-level rangers, guards, and watchmen. Strategies such as hiring labor from outside villages (or from dominant communities within the villages), negotiating 'deals', and manufacturing consent through illegitimate local institutions are employed to ensure that saplings are planted, as a bare minimum commitment, and to justify calculated plantation quotas and statistics on forest coverage (Choudhury & Aga, 2019; Fleischman, 2014; Gerber, 2011). Communities may take these in stride, with an ultimate plan of reclaiming the land from the scrawny saplings to plant millets (Cenchrus americanus and others) or niger seed (Guizotia abyssinica) instead. But what binds the plantation ecology of CA to the green economy is the equivalence by which each monoculture planted justifies deforestation elsewhere. Here, a unique contradiction emerges. CA plantations are telecoupled to deforestation. They exist to mitigate harm, while extending harm and control by transferring forests from the hands of many into the hands of State forest agencies. While attempts continue to gobble up grassland areas and what the government sees as "wasteland" for conversion to forested monoculture plantations, efforts to reclaim land back into the hands of land users continue apace. Meanwhile, plantations are spreading to distant locations, increasing the State's grip on territories in other jurisdictions.
The social impacts of CA reflect the scale at which people most vulnerable to the impacts of climate change and ecological breakdown will also be most threatened by the green economy solutions supposedly aimed at addressing those impacts. It also reveals the fragility of the plantation: for all the efforts to make plantations efficient, calculable, predictable and controlled, communities assert that it doesn't take much to pull up a weak teak sapling and plant millets or niger seed instead (Valencia, 2019). Modifications to the Forest Conservation Act in 2023 will make it easier to divert forests to expedite developments deemed to be in the "national interest and concerning national security," thus removing the need for forest compensation altogether in some cases. These security-related infrastructures include, among others, projects for planned commercial wildlife safaris, ecotourism projects, public works in so-called "left wing extremism" areas and in any location within 100 km of India's international borders (Sharma, 2023). It is also expected that forest plantations can be designed to maximize carbon sequestration for tradeable carbon credits and to render development projects carbon neutral (Ibid.).
---
Never quite a conclusion, towards abolition ecologies
Describing plantation ecology is not meant to showcase how "green" is being done wrong and how it can be done right. That would be too precise, too replicable, too rational, even if it were indeed possible. Moreover, the "greening" of plantation ecology is not limited to specific interventions like compensatory schemes or sustainability certifications but may also apply to sweeping economic transition programs like the Green New Deals7 of wealthy, industrialized nations (Ajl, 2021) that do not pay attention to the logics we have described here. Ecological solutions cannot come from plantation ecology, the same discipline and design that has only sharpened the knife blade of ecological breakdown and inequality, precipitating the loss of sociocultural imaginaries and capacities to intervene and generate alternatives. Liberation from the plantation requires dismantling the plantation rationalizer in our collective minds. This means making policy-unfriendly recommendations amidst an ever-tightening State/corporate nexus, and regenerating a praxis of worlds (plural) in common, crucially grounded in desires for freedom from oppression and dehumanization. To us, the demands for social and environmental justice that defy the imposition of plantation discipline by twinned state and private sector interests are ecologizing practices, meaning that they reanimate thought and being in ways that stimulate conditions for the proliferation of alternative socio-ecological relations. These spaces reflect relationships to the land that have historically regenerated conditions for living out of both desire and survival (McKittrick, 2013).

7 Compromises to labor that underpin welfare-based social democracies in wealthy industrialized countries of the North depend fundamentally on the pillaging of dehumanized labour and cheap (renewable) energy and material extraction in the Third World (Ajl, 2021). The proposed Green New Deals of Ed Markey/Alexandria Ocasio-Cortez in the US and the European Green Deal risk being strategically imbricated within the logics of plantation ecology as described.
A plethora of questions emerges as to how these forms of existence terraform landscapes of hope against hope, of people and non-people alike, temporally through situated encounters and geographically across territories. Such ecologies are not predictable, efficient, calculable, controllable nor are they replicable, but rather reflect the unlikely kinships of place-making that emerge amidst the ruins of the plantation (Stoetzer, 2018). They do not deepen or widen plantation commodity frontiers; yet could just as easily be essentialized as new desired endpoints and themselves driven into new production systems that placate any attempt to reroute the template of plantation ecology. Put differently, they could all too easily be romanticized into new equitable, politically-correct, diverse, and inclusive plantations - that fail to address uneven dimensions of value and knowledge accumulation.

7 Compromises to labor that underpin welfare-based social democracies in wealthy industrialized countries of the North depend fundamentally on the pillaging of dehumanized labour and cheap (renewable) energy and material extraction in the Third World (Ajl, 2021). The proposed Green New Deals of Ed Markey/Alexandra Ocasio-Cortez in the US and the European Green Deal risk being strategically imbricated within the logics of plantation ecology as described.
Only profound solidarity across the social fragmentations embedded in plantations can overcome the tendency to reproduce plantation logic. This requires internationalist and intersectional solidarity movements that encompass agrarian and fisherfolk demands for autonomy over food production systems, Indigenous struggles for territorial sovereignty, and demands for decent working conditions that reflect lived experiences across gender, race, and immigration status. It is therefore imperative that so-called "greening" solutions be scrutinized for the tyrannical interests of the 1% that they ultimately serve. As we have argued in this piece, dehumanization and ecological simplification are not merely technical issues of mal-distribution or improper recognition within plantation discipline, but are fundamental conditions of its existence and expansion (Ajl, 2021;Coulthard, 2014). Reclaiming autonomy from the plantation has inspired decolonial and abolitionist thinkers from the Black radical tradition (e.g. Angela Davis, bell hooks, Kimberly Crenshaw, Saidiya Hartman, Clyde Woods, and Ruth Wilson Gilmore); Indigenous ecologists like Potawatomi scholar Kyle Powys Whyte, Yellowknives Dene scholar Glen Coulthard, Michi Saagiig Nishnaabeg scholar Leanne Betasamosake Simpson, and Unangax scholar Eve Tuck; anti-imperialist and non-Eurocentric decolonial scholars like Liberian activist and academic Robtel Neajai Pailey, Cameroonian historian Achille Mbembe, Peruvian sociologist Aníbal Quijano, and Bolivian sociologist and historian Silvia Rivera Cusicanqui, as well as anticaste philosophers and contemporary thinkers like Jyotirao Phule, Babasaheb Ambedkar, E.V. Ramaswamy Periyar, Suraj Yengde, and Kancha Ilaiah among many others.
The process and material outcomes of attaining freedom from the plantation is what Heynen and Ybarra (2021) refer to as abolition ecologies, characterized as embodied relationships between people and territory imbricated within a struggle for liberation from state-sanctioned violence, criminalization, and dispossession. The movement to "Defend the Atlanta Forest", which aims to halt a planned police training facility whose construction threatens the safety and environment of neighboring Black communities and is an act of ecocide in an era of ecological breakdown and on the sacred stolen territory of the Muscogee Creek people, is an abolition ecology in the making (Bernes, 2023). It intertwines the efforts of prison abolitionists, dreamers of Black liberation from the carceral state and legacies of plantation oppression, Indigenous activists and environmentalists alike through a 'movement of movements.' Together, these actors root themselves with the plants and animals of the forest through a myriad set of human and non-human relations premised on social and ecological justice.
Abolition ecologies are the biophysical and socio-spatial relations that shape and are shaped by legacies of resistance from (neo)colonial oppression rooted in situated categorizations of dehumanization (e.g. antiblack, anti-Dalit) and ecocide (Sultana, 2022). An abolition ecology means dismantling the infrastructure of plantation ecology and putting an end to the possibility that plantation "irrationalities", conceived as economic externalities, could ever be enrolled back into a more diverse and inclusive plantation. An avenue of necessary inquiry resides in how such dismantling ought to take place, without falling prey to co-optation. Does care for social and environmental justice ultimately require blowing up pipelines, to borrow the title of Andreas Malm's 2021 book? It may be, as Grubačić and O'Hearn (2016) argue, that abolition ecologies are deeply liminal, as the example of maroon ecologies in Haiti illustrates. This means that they may not "exist" as such, but are immanent in resistance to being named, mapped, or fully analyzed (Harney & Moten, 2013). This immanence of resistance is itself the relationality reflecting what ecological complexity means and to which care-ful attention is needed in doing away with plantation discipline, yet often with little guarantees.
The "Gesturing Towards Decolonial Futures" collective have recently reflected on ways to ensure that efforts made towards decolonization are not re-routed into the same desires and entitlements that lead to colonial practice, rendering decolonization a weaponized buzzword that serves colonial interests (Stein et al., 2021, Tuck andYang, 2021). Part of this responsibility lies in affective affirmation of "staying with the trouble" (e.g. Haraway, 2015) without being content with residing in a space of mere intellectual critique of coloniality. This does not exonerate the "reproduction of modern/colonial desires and habits of being" (p.10). The collective identifies "circularities" or pitfalls that ensnare engagement with decolonization back into colonial practice, highlighting how they positions themselves aspirationally within what they call "the house modernity built" to better contextualize the pathways that engagement with decolonization may take. In each case, they proceed by walking readers through the mistake-ridden journey of trying better, with attention to humility, curiosity, attention to difference, self-complicity and long-haul discomfort with the trouble we find ourselves in. By keeping an eye to the ways resistance and response strategies to plantation logics fold back into what they seek to escape from, it becomes possible to hold out a "horizon of possibility" without cynically writing off the recurring inevitability of the plantation. Part of this practice involves disinvesting in the unethical and deadening trajectory of the plantation but without arrogance as to the "correct ways" of fashioning alternatives. This involves taking the lead of abolitionist and anti-colonial struggles as well as through individual and societal commitment to "hospicing" the harmful everyday practices and habits of being that consciously and unconsciously reproduce plantations (Stein et al., 2021).
Undoing the "green" plantation is an undertaking in taking ecology seriously, and by that we mean opening the deeply political horizon of how harmful habits of thinking and being are reproduced in society and in ourselves. | 74,135 | 1,736 |
6223e2b3c02dc7fa2bdf481353dc449e81c62237 | Case Study on Obstacles to the Social Integration Process of Young People of Roma Origin | 2,023 | [
"JournalArticle"
] | There are numerous obstacles to the advancement of Roma young people coming from disadvantaged social environments. Among these, the phenomenon that can be described by the expression köztes kitettség [verbatim: intermediate exposure] stands out. Social integration is an integration/assimilation practice complying with majority norms, which also means moving away from the values of one's own local environment. According to the experience gained from research conducted on this topic, there are a lot of Roma young people who are trapped between two "societies" -their own sociocultural environment and the majority environment -and, consequently, find themselves in a special situation. The aim of this study is to shed light on the general context and the social significance of the phenomenon described above through recording field experiences and applying case analyses. | Introduction
The basic question addressed by this research project is whether it is possible to become an intellectual without conflict and identity crisis, i.e., how a young person of Roma origin can break out of the constraints of disadvantage amid contemporary circumstances. 1 Research on the Roma in Hungary comprises several approaches to the topic of integration and, primarily, the integration of peripheral groups, from a number of different perspectives. 2 Among other efforts, investigations are being conducted on ethnic coexistence situations, 3 but the issue of the Roma language and the shift between languages 4 has also been explored even more intensively, and several social researchers have examined the topic of ethnic mixed marriages, too. 5 If we look at the situation of Roma groups living in Hungary, we can find numerous research efforts that explore the relationship between hosting communities (in our case, the majority Hungarian society) and immigrating ones (in this case, groups of Gypsies). One of the frequently analyzed topics of assimilation research in Hungary is the development of cultural relations between Hungarians and Gypsies, which is usually analyzed within the conceptual framework of assimilation, integration, cultural adaptation and dissimilation.
This study deviates from the focal points listed above and examines the circumstances of Roma youth launching into intellectual careers. By presenting a case example, it aims to illustrate yet another important aspect of coexistence. On the one hand, the case study is a suitable tool to highlight the topicality of the issue, while on the other hand, it also proves that the life situation it presents cannot be solved by involving only those concerned and affected (i.e., those living in it).
Specifically, this study offers an analysis of the life path of a young Vlach Roma couple with a college degree, which reveals the complex mechanism of influence of the social, cultural and economic conditions that maintain what I call "intermediate exposure." Its chief objectives include identifying the problem and outlining further investigation possibilities of the related topic.
2 József Kotics has conducted numerous field research projects in Hungary and in regions inhabited by Hungarians beyond the border. For details of the theoretical-methodological approach to and research findings on Hungarian-Roma coexistence, see Kotics 2020. Gábor Biczó proposes the introduction of the concept of "ressentiment" to help interpret Hungarian-Roma coexistence situations. In his work, this concept, as an analysis of the culture of resentment, can help understand what processes take place in the affected minority communities. Cf. Biczó 2022. Norbert Tóth investigated the impact of segregation and school segregation on the social empowerment of those affected in the Vlach Roma community of a small settlement, examining among other features the indicators of further education and school performance. Cf. Tóth 2019.
3 There are several comprehensive analyses available on this topic. For further details, see Kovács et al. 2013; Biczó et al. 2022.
4 See, for example, Bartha 1999; Nagygyőryné 2018.
5 For further details, see Tóth-Vékás 2008; Gyurgyík 2003.

Based on the research findings so far, it can be concluded that young people of Roma origin who participate in higher education while coming from a disadvantaged position in terms of their family sociocultural background, and then try to make a living as intellectuals after graduation, find themselves in an existentially, psychologically and socially unstable situation between the majority society and their own immediate community, which might be dubbed the state of "intermediate exposure". 6
---
Research background and circumstances
In addition to the work done for a period of ten years in a Roma college for advanced studies, on which the present study is based, anthropological field research conducted primarily in Roma communities residing in disadvantaged areas of North-Eastern Hungary has provided specific information for this analysis. Apart from community studies, research on Roma intellectuals has also been grounded in exploring the role of social individuals in local communities. At the level of the social role of the individual, cultural shift processes in local communities, such as changes in the value system, can be properly identified. In the light of the changes in the value system of local societies determining the coexistence of the majority and the minority, the following question of practical significance can be usefully examined: How is it possible to resolve the stereotypes that dominate the Hungarian-Roma antagonism that can often be observed in the social space? Furthermore, I have sought to understand during the course of my research what external factors sustain and operate the oppositional structure of ethnic coexistence. 7 The concept of "intermediate exposure" makes it possible to interpret what it means to be caught between two "worlds" at the mercy of the system of stereotypes that dominate the relationship between majorities and minorities.
Becoming an intellectual of Roma origin in Hungary is a complicated process, and it cannot be simply described as the graduation process on the basis of performance in higher education. Young people, most of whom are disadvantaged because of their backgrounds, have to face mobility challenges during the years they spend in college, which automatically presuppose external supportive institutional conditions. 8 The most important component of this kind of support is the network of Roma colleges for advanced studies (Roma Szakkollégiumok) in Hungary, all members of which operate as genuine integrated institutions, where young people of either Hungarian or Roma origin form communities together. The rules of operation of this system, which are applied as prescribed, prevent these colleges from forming segregated inclusions in higher education. 9 Besides supporting the chances of success in higher education, Roma colleges for advanced studies also take care of the important task of strengthening Roma identity. It is a common experience that young people from Roma families face an identity crisis in higher-education institutional settings, which is often accompanied by an identity conflict. Researching the life paths of successful persons of Roma origin, Margit Feischmidt identified the cause of the identity conflict as follows: "in most cases, the intention to assimilate and the majority rejection encoded in institutional discrimination and/or everyday racism" 10 may be behind the phenomenon. Young Roma intellectuals drifting into an "intermediate exposure" situation encounter institutional discrimination and identity conflicts primarily not during their years at college or university, but later on in the labour market. Experience shows that the Roma college for advanced studies system is not yet fully prepared for the challenges its graduates face as employees, since academic success alone - as experience so far has shown - does not guarantee success in life outside the "institution."
---
The circle of those concerned
Determining the number of gypsies living in Hungary is a difficult task in several respects and, even today, it is primarily the issue of identification that proves to be a challenge for social researchers. If we look at the figures of the census conducted every ten years, we can see that, in 2011, as many as 315,583 people declared themselves to be of Roma nationality. 11 Another important aspect of the data that can be gleaned from the survey is that only ~1% of Roma people have a higher-education degree.

8 An important experience related to describing the problem of "intermediate exposure" was that, as the director of Balázs Lippai Roma College for Advanced Studies (2016-2018), I developed a "helping-supporting" work method (Tesz-Vesz-Koli), which can also be applied to disadvantaged Roma university students. It basically helps students to develop their individual skills and abilities and to orient themselves in the higher-education environment by building on their individual aptitudes. (For an introduction to the working method, see Szabó 2016.)
9 Find more details on the integrative efficiency of Roma colleges for advanced studies in Biczó 2021.
10 Feischmidt 2008.
A different methodological approach was applied by the research group of the University of Debrecen in their project conducted between 2010 and 2013, in which the territorial location and distribution of the Roma living in Hungary was primarily examined. Using the method of expert estimation and external classification, they estimated the number of people of Roma origin to be about 876,000. 12

Pic. Nr. 1: Settlements where students of Roma colleges for advanced studies come from in 2020. (Biczó-Szabó 2020: 34.)
Another nationwide survey was conducted in 2020, when Gábor Biczó and the author of this study conducted a comprehensive analysis of the members of the 11 Roma colleges for advanced studies operating in Hungary. 13 It revealed that Roma students in colleges for advanced studies are present in higher education in all 15 fields of study and in a total of 122 different majors. We learned from the study that the geographical recruitment environment of students was also fairly diverse, with those coming to higher education representing a total of 204 different settlements.
It can be clearly seen in the map above that the vast majority of the members of colleges for advanced studies at that time came from parts of Hungary that are most densely populated by Romas (North-Eastern Hungary and South Transdanubia) according to data collected by the Pénzes-Tátrai-Pásztor research group during the period under review. Furthermore, it can also be seen that, within the distribution of the residential settlements of college for advanced studies members according to legal status, those coming from small towns and villages represent a higher proportion than those coming from metropolitan or urban areas. Thus, the circumstances of the disadvantaged source environment fundamentally determine the initial state, the compensation for which decisively shapes the development of students' college/university years.
For most of them, university or college life means a significant change in relation to where they come from. At the same time, based on the experience gained from the follow-up of Roma college-for-advanced-studies graduates, the real challenge for them begins after graduation. They are faced with a choice between four options:
1) One option is to return to their original living environment and try to make a living locally in their profession.
2) Another solution is to return to their original living environment and, in the absence of a job opportunity matching their profession, find employment in another sphere; typically in jobs that do not require a college degree.
3) They may also decide to look for a job related to their profession, but in a larger city or in the capital, even if it is at a considerable distance from their place of residence.
4) As a final variation, they can continue their studies in post-graduate education, taking advantage of the "protective system" of the university and the college for advanced studies sphere.
The above categories represent a valid analytical framework for practically all young Roma intellectuals -students who have joined the Roma college-foradvanced-studies network. After completing their studies, Roma young people who have just graduated do not always follow the path they had planned beforehand, but rather the one that "opens up" to them, so to speak. After graduation, their career depends on the openness of the immediate majority environment to integration and the specificities of the personal living environment.
---
Intermediate exposure: a case study

R. K. grew up in a traditional Vlach Gypsy family in the settlement Hodász, located in one of the most disadvantaged areas of North-Eastern Hungary. According to tradition, her parents talked to her in the Roma language, and she learned Hungarian in kindergarten. Her parents tried to protect her from all new influences, which meant she would not be allowed to go anywhere alone. With the exception of school trips, R.K. did not leave her residential area, since according to gypsy culture, young girls were not allowed into foreign environments.
"I have loved travelling since I was a child. That's what I always fought for with my father that I would be going still. That there is no such thing, that I am not going. Let's say for a hike, or rather, I say this, which I really wanted. That's all I wanted to do, to go on trips, to see the world, to get to know the cultures. For me, that was what I really wanted." 14 The internal conflict with cultural traditions was thus evident at a very young age.
"Because when I was little, I missed them, I didn't ride trains, I only got on a train for the first time when I was 18 or 19. I didn't take the bus, only when I was in high school, and I really liked to go and live, because my dad didn't really give in; he was scared, and this desire only grew stronger and, when I could, I tried to do everything." 15 During her high school years, R.K. saw an example of some young people living in similar sociocultural circumstances choosing further education, but this was not a natural alternative for her.
"And when we went to grammar school, I didn't care about the fact that I would go to further education now, but to have my high school diploma, and then what has to come will come. And then my sister and our cousins, and then they said they were going to college. But I didn't really care about that either; let them do whatever they want, and then something will happen to me. I didn't care much about it; I always tried to have it with the present." 16 In R.K.'s family, her brother and cousins, with the support of their high school teachers, decided to go on with their education. However, through this move, they met with complete resistance from their family environment.
---
"And they weren't allowed into dormitories first; they would have been allowed into school.
Due to tradition, it is not very customary to let girls into dormitories and into the world so much.
[…] But in the end, my sister wanted it so badly that they had to, they had to agree." 17 R.K. was able to get into higher education because one of her sisters had already started her university studies a year before, following her own path and, therefore, there was an opportunity for them to move to the same dormitory. After successfully graduating from high school, R.K. was admitted to the University of Debrecen, where she started her studies in infant and early childhood education in 2015. Going to university opened up a new path for her: a new environment and new challenges in everyday life. At the same time, membership in the college for advanced studies and the dormitory companions also meant security for her, as she had a large number of acquaintances and relatives in the institution from her settlement of origin. However, her biggest support and supporter was R.M., with whom she had already entered into a relationship during the training.
"And then it was in 2017 that we eloped in the traditional way. The way it happens is that we were still in the dormitory, and then we went to Budapest, and then I phoned my mother from there that I was already with B, that we had run away. And then we went home; we were getting ready at home, and then we discussed when the two families would take me home, because I can't go home alone, they can't come to me either, but until this family takes me home, together with my husband, we won't really be able to meet." 19 Despite the majority environment as well as the newly experienced system of customs and norms, the family tradition proved to be strong, so they decided to marry according to Gypsy customs. General experience shows that the majority society is unable to make sense of the tradition-following Roma marriage customs and is less accepting of the practice of "elopement". This is primarily due to the fact that they do not have sufficient information about the Gypsy customs, so eloping as a form of marriage usually only strengthens negative prejudices. 18 The situation has been handled with surprising flexibility on the part of her family. The father, defying the majority stereotype that education has no value for Gypsies, made a single request:
"we can do whatever we want, but we should get the degree, that's all he wants done. So they have already understood and accepted how much a degree or a profession is needed for a young person, be that a Roma or non-Roma youth. That was his request. And I was already a woman then, and even then, it was important." 20 R.K. then successfully graduated at the same time as her husband, who earned a vocational training qualification in higher education. He finished his studies with very good results, and always completed his practical classes receiving unanimous praise. After graduation, they planned to live and work as an intellectual couple according to the values expected to be shared by the majority middle class, so she and her husband moved into an apartment in the city where R.K. did her internship. Their goal was to get a job as soon as possible. They planned everything consciously; they wanted to make ends meet independently and without family support. "And then we didn't move home but tried to find a job there. To find a job, it was very difficult fresh out of college, and we were unemployed for a year." 21 The reason for the unsuccessful job hunt and repeated rejections was always R.K.'s ethnic background. Its external anthropological features are rather telling at first glance; everyone classifies her right away as belonging to a specific ethnic community of origin. Besides the efforts to find a job, she also managed to join a competency development training course, which indirectly contributed to her successful employment later.
"For a year in the same dwelling, feeling aimless and all these other things and, then and there, I developed quite nicely; I felt this about myself, and then when I completed this little training, in August, I was admitted to the nursery on Görgey Street in Debrecen as an early childhood educator, and I have been working there ever since." At work, the initial fears and inhibitions soon disappeared, as she quickly gained acceptance both among children and parents, as well as towards her colleagues due to her professional competence and kind, helpful attitude.
An important part in the development of their seemingly stable situation was played by the immediate social environment that surrounded them. However, the COVID-19 pandemic suddenly created unexpected circumstances.
"Our lives changed a lot because, due to the virus, we just packed our stuff and moved home on an impulse. […] My husband opened this second-hand clothes store on June 1, 2020, which went very well, and it was also actually convenient. I only did the cleaning part, which I did, whereas my mother-in-law, she is a shop assistant; she has that qualification, and then she worked in it, she was the employee. And then everything went quite well but, for some reason, I felt so out of place, and I guess my partner felt the same way. And then our lives took a big turn, […] we didn't feel at home here, so we moved back and we rented an apartment in Debrecen." 23 Since R.K. acquired a lot of new life experiences during the years at university by taking part in several trips in Hungary and abroad, meeting quite a few new people, seeing and experiencing new life situations from up close, she could no longer imagine her life only along the traditional Gypsy female role expectations.
Thus, the feeling of "intermediateness", of belonging neither here nor there, became a constant part of her life.
"We didn't stay there long, as it turned out that I was expecting a baby, then we moved home again, and then we realized that this house was a refuge for us. And then, from that point onwards, we started to renovate this house, to care a little bit about it. We forged new goals that bound us here to stay in Hodász." 24 tréninget, akkor augusztusban Debrecenben, ott felvettek a Görgey utcai bölcsődébe, mint kisgyermeknevelő, és azóta is ott dolgozom.] 23 They silicate-block and brick house in the segregated neighborhood of Hodász was inherited by them from her husband's grandparents. This predominantly Roma environment and the fundamental social, cultural and economic differences between the village and the "big" city required a high degree of adaptation efforts from the young couple. Despite this, their willingness to help their own community, their readiness to do something, along with their professional commitment was well demonstrated in the fact that R.K. and her husband established an association in 2017 with the aim of supporting disadvantaged young people in Hodász in order to help them catch up. Through organizing summer camps, distributing donations and hosting various professional events and public lectures, they tried to promote the strengthening of Roma identity and breaking out of disadvantages in the lives of Roma and non-Roma young people in need.
"On top of all, we were renovating a house. So, it was very stressful for both of us but, even though we were building, despite all these goals and dreams that we had and partially realized, Debrecen, for some reason, it always remained the true desire of our hearts, and we moved again. That time, already there, to Civis Street. By the way, I planned that for myself at the time, and M. also said that it would be like this. We would then take it from here when the little one would be born, and then I would go back to work from there. Well, but it didn't happen that way because, in January 2021, M. became very ill. He was also involved in organizing education, and that was also our livelihood. And because of the virus and illness, he couldn't do this job, and so we couldn't pay the rent of the apartment we lived in, even though we loved it very much. We had a great time there: there was the post office, the convenience store, everything. Just as much company for our needs, which was enough. The colleagues were there, in that part and in the house next door, and there it was very good. But still we had to move home. Rather, I forced this, because I already saw that the following month would be rather tight, and then we did not wait but moved home." The feeling of being vulnerable to circumstances gradually became their dominant life experience.
"Here in Hodász, regardless of the fact that we have this house here, and we keep building it and making it pretty, we do it, yet there will never be a better workplace for us here. Therefore, we have no reason to stay and to live here. Family is the only thing that binds us to this place, but I think, wherever we go, we will always come back to visit home." 26 The stalling of mobility filled their lives with constant conflict. After the events of starting an intellectual career, a successful departure and mobility, the turn of getting forced back into their original environment became a reality circumstance determining their situation of "intermediate exposure".
"There are a lot of things happening in my life that pull me back, […] I think the fact that I can't open up at home because the role is different also plays a big part in this. Like, let's say elsewhere, in the rented apartment. It's completely different, even though no one tells me; it's just that it's supposed or not supposed to be done here at home, and there is no such thing there. Maybe that's why my life here is uncomfortable. I think it's because it's very, very difficult for me to live here."27 At this point, her Roma identity and the traditional value system brought along from her original community created an obstacle in R.K.'s self-interpretation that she could not reconcile with her changed role in life and the stalling of her upward mobility. This situation usually takes the form of a permanent conflict of roles.
"I don't know why that, here in Hodász, as if our horizons were narrowing and our opportunities were also narrowing. And maybe that's why I, or maybe that's why I don't feel so good at home. There wouldn't be any problems anyway; there just aren't many, so many, no, I don't see my life as bright and beautiful here at home as elsewhere. These are mainly settlements like Hajdúböszörmény and Debrecen, Debrecen, where people can really reach their full potential and live their lives as they want. I feel good regardless of finances. I'll see if it turns out somehow. We would like to no longer live in an apartment, but in a house that is our own. Whatever we don't have to pay for monthly, and we can sit outside in the summer and stuff like that. Now we would like to flee, we would go to Debrecen but, for the time being, there is no prospect yet." 28
---
Lessons from an in-depth interview
According to the in-depth interview with R.K., she felt that her life was unhappy at the time of the recording. The question is how to analyze this phenomenon within the conceptual framework of "intermediate exposure", as a general phenomenon determining the social mobility of young Roma intellectuals. The majority middle-class value expectations portray the trajectory of the average intellectual's career as a schematic process of events: successful university admission after graduation from high school, followed by a successful completion of requirements in college, graduation, employment, tax payment and establishing a family. The compulsion to conform to normative expectations and the role expectations adapted to them are inherent in living as an intellectual.
The subject chosen for our analysis, R.K., comes from a traditionalist Vlach Gypsy native language environment. Following a successful high school graduation, she went to the University of Debrecen, where she became a certified infant and early childhood educator. She managed to find a job in her line of profession, got married at college, started a family, and is currently on GYES [maternity leave in Hungary].
In her story, the conventional order of the stages of her career is different in the respect that marriage and having children did not allow her to stabilize in her role as an employee just after graduation. As an important feature of background circumstances, her insufficient financial background made it impossible for her to pay rent and maintain living standards in a city and, at the same time, it was a fundamental reason for her reintegration into her original segregated environment. Beyond all that, however, the question is how the "intermediate exposure" applies in the light of the fundamentally norm-following trajectory. Also, why and how does the integration process get stuck in a kind of permanent transition?
The case of R.K. provides us with an opportunity to interpret the nature of "intermediate exposure" and its long-term survival as well as to address the question of how it constitutes an obstacle to integration into mainstream society.
R.K. took the career path of Roma intellectuals and found herself in a liminal situation. Both in her self-definition and in the qualification of her environment, it is often stated that she is "too Hungarian for Gypsies and too Roma for Hungarians". Following this approach, we may reckon that it depends solely on her personal choice whether she remains a Roma person or becomes a Hungarian one by assimilating. However, the fact is that, whatever is decisive here is not so much her own decisions but rather her circumstances. Do young Roma intellectuals, exposed to the state of "intermediate exposure", really have a choice, or is it their external environment that forces them to make certain decisions? On the one hand, R.K. is a "victim" of the social expectations of her own cultural community whenever she is in her original environment. The traditional Vlach Gypsy customs, as it can be clearly seen from the previous briefly outlined compilation, represent an important system of values and norms for her, as well as a point of orientation and a cohesive community. Family customs in R.K.'s life are not present as a choice, since she was born into the culture and there is simply no question asked concerning her transcending them in any way whatsoever. In fact, she perceives and understands her own situation as a committed follower of traditions.
By contrast, the system of values and norms of the majority society has acted as an unavoidable factor shaping her career, her mobility and her chance of becoming an intellectual. This latter has become an inalienable part of her personality, especially through the patterns she has followed for so many years in educational institutions. R.K. does not intend to completely break up with her original community, but the way of life and lifestyle offered by the opportunities inherent in her intellectual career, and what R.K. indeed experienced after taking up employment, do act as a kind of counterpoint to her original environment. Consequently, the efforts to harmonize these two "worlds" seem to bump into serious obstacles in everyday practice.
"Intermediate exposure" thus means that she cannot actually meet the expectations of either of these communities without contradicting herself. Although she has gained all the knowledge and experience to work as a graduate intellectual, she cannot maintain it because her financial means do not allow her to lead the life she desires. Forced back into her own community, she experiences the consequent situation as an irresolvable step backwards, which she defends herself against by constantly referring to the planning of relocation.
At the same time, it is also a fact that she cannot fully become part and parcel of her own original community either, because she cannot follow a professional career path parallel with community expectations and, when she pushes this urge into the background, she gets into a conflict with herself. The dilemma of this life situation, at least as it seems, cannot be solved on one's own, without outside help. The assertion of intermediate exposure indicates that, on the basis of personal life expectations and education, social status cannot be effectively reconciled with both the opportunities and the physical circumstances.
---
Theoretical conclusions
Based on the overall research experience gained so far, it may be safely stated that one of the hindering factors of the social integration of young Roma intellectuals is the development of "intermediate exposure" into a condition that rules and determines their personal life path.
During the course of examining the relationship between the Roma minority in Hungary and the majority society, it is important to keep in mind that the relations between ethnic groups and local communities are subject to dynamic change processes. The reasons for this can be traced back to both external and internal influences on communities, as well as a combination of these. The process of changes occurring in social group relations and the relations of individuals involved in them is interpreted by the discipline of anthropology from a number of different aspects. In this analysis, we intend to raise some of the most important elements of the general structural issues associated with the social mobility of intellectuals of Roma origin.
The low number of intellectuals of Roma origin in Hungarian society -less than 1% of Roma people graduate from college -gives rise to the hypothesis that, for most people, a career as an intellectual is a first-generation undertaking, which turns into a process involving a change of social status.
The change of status resulting from mobility can be sorted out by relying on Árpád Szakolczai's conceptual approach for interpreting the integration process of Roma intellectuals. As a basic principle, the author proposes to use four categories of analysis closely related to each other when describing the phenomenon of status change: (1) liminality, (2) imitation, (3) trickster and (4) schismogenesis. 29 From the perspective of our topic, it is primarily liminality that requires further explanation. The concept of liminality has a long history in anthropology, as it was first introduced by Arnold van Gennep in 1909 in his book Rites of Passage. His thesis was based, among other things, on his field research in Madagascar. Gennep contends that rites of passage are "universal anthropological phenomena that accompany individuals and communities through various transitional points in human and social life, helping to make the transition between two stable states" 30. In his interpretation, liminality is the middle stage of a rite of passage, which is also a central moment. What does this all mean? In order to emerge from the liminal phase, one must meet certain requirements and, where appropriate, tests, depending on socio-environmental characteristics. 31 Rites of passage are associated with the transition from one age group or one human condition to another.
Gennep's book was translated into English only in the 1960s. Then, in 1963, the concept of liminality was introduced into academic anthropology in connection with the name and work of Victor Turner. It was then, within this framework, that the interpretation of rites of passage became a priority topic of anthropological research. Since the 1990s, the term has been increasingly commonly used to analyze societies as they move and transform.
Important research on the social integration of Roma intellectuals in Hungary has been carried out by Klára Gulyás, who proposes to summarize their mobility characteristics in the concept of permanent liminality. In her interpretation, this refers to the life situation that characterizes the development of the social role identity of Roma graduates in the process of becoming intellectuals. This condition occurs when they "move away from their community of origin as a result of the social/mobility process, but do not become accepted members of the majority professional community and the broader majority community" 32 .
However, the analysis of mobility trajectories based on in-depth interviews shows that the concepts of liminality and permanent liminality may only partially describe the situation in which Gypsies living in Hungary find themselves upon starting intellectual careers. The anthropological meaning of liminality is that this state of existence is temporary by nature, and the situation itself, as well as the social condition associated with it, necessarily ceases to exist. Contrarily, the concept and meaning of "intermediate exposure" emphasize that, in the light of the careers of numerous intellectuals of Roma origin, this state of being is not temporary, and the condition of being trapped between different social expectations and oftentimes systems of prejudices cannot be overcome. While liminality has a start and an end point, and those affected can pass through this stage as soon as they are incorporated, "intermediate exposure" - at least, as research experience reflects this - is a permanent state. 33 The life path of the young person of Roma origin presented in this study, who is a college graduate and an intellectual taking a white-collar career path, highlights the duality that is called the life experience of belonging neither here nor there, while the socio-cultural characteristics that make up the general circumstances of the situation allow a comprehensive interpretation of the phenomenon. Ultimately, this topic could be discussed and sorted out as a general obstacle to the social integration of Roma intellectuals. Based on our professional experience, it can be stated that, in order to eliminate "intermediate exposure" in the phase of liminality, attendance and assistance from the majority society is required. Help or assistance here means giving young college graduates of Roma origin the opportunity to prove themselves on the labor market and, at the same time, a chance to become full members of society while preserving elements of their own culture. In addition, it is equally important that they should arrange their relations with their own original environment in such a way that their change of status and role would not evoke a voice of rejection on the part of said environment, but would allow them to see the opportunity offered by the role model.

30 Gennep cited by Szakolczai 2015: 5.
31 Liminal conditions can be ritually regulated trials, as in the case of rite of passage ceremonies, or simply a series of ritualized events as, in most cultures, the observance of cultural rules around marriage.
32 Gulyás 2021: 8. | 37,003 | 878 |
9ab04224a571a07d600cee0bef95b0332a37447c | Food shopping transition: socio-economic characteristics and motivations associated with use of supermarkets in a North African urban environment | 2,010 | [
"JournalArticle",
"Review"
] | Objective: In the context of the nutrition transition and associated changes in the food retail sector, to examine the socio-economic characteristics and motivations of shoppers using different retail formats (large supermarkets (LSM), medium-sized supermarkets (MSM) or traditional outlets) in Tunisia. Design: Cross-sectional survey (2006). Socio-economic status, type of food retailer and motivations data were collected during house visits. Associations between socio-economic factors and type of retailer were assessed by multinomial regression; correspondence analysis was used to analyse declared motivations. Setting: Peri-urban area around Tunis, Tunisia, North Africa. Subjects: Clustered random sample of 724 households. Results: One-third of the households used LSM, two-thirds used either type of supermarket, but less than 5 % used supermarkets only. Those who shopped for food at supermarkets were of higher socio-economic status; those who used LSM were much wealthier, more often had a steady income or owned a credit card, while MSM users were more urban and had a higher level of education. Most households still frequently used traditional outlets, mostly their neighbourhood grocer. Reasons given for shopping at the different retailers were most markedly leisure for LSM, while for the neighbourhood grocer the reasons were fidelity, proximity and availability of credit (the latter even more for lower-income customers). Conclusions: The results pertain to the transition in food shopping practices in a south Mediterranean country; they should be considered in the context of growing inequalities in health linked to the nutritional transition, as they differentiate use and motivations for the choice of supermarkets v. traditional food retailers according to socio-economic status. |
As a corollary of rapid economic development, middle-income countries are experiencing a rapid nutritional transition, featuring marked changes in diet and lifestyles (1). In this context, major transformations in the food retail sector have been observed, including a sharp rise in the number of supermarkets (2). In urban areas of developing countries, large-scale food retailers are tending to replace traditional markets, neighbourhood stores and street sellers; this process is referred to by some authors as 'supermarketisation' (3). Until recently attention was focused more on the potential consequences of such supermarketisation for the agricultural sector (4,5), and the results of the few studies linking the development of supermarkets to possible changes in food shopping habits and dietary intake have been mixed. However, a recent comprehensive review of the dietary implications of supermarket development worldwide (6) clearly showed that the continued development of supermarkets will have major implications.
Beyond the influence on food consumption for regular users, the implications of the development of supermarkets for dietary intake at the population level also depend on the prevalence of exposure to these retail outlets. Regarding this issue in developing countries, some authors (7,8) have proposed a three-step model of diffusion in which supermarkets first appeal to upper-income consumers, then to the middle class and finally to the urban poor, because prices tend to drop as supermarkets continue to spread. However, in urban areas of developing countries, supermarkets currently appear to coexist alongside small-scale commercial outlets (9), central food markets, neighbourhood stores and sellers of street food. Among the characteristics of supermarkets that have implications for consumers' diets are their location and format (6,10). However, to our knowledge, no study has yet analysed the socio-economic characteristics of the shoppers who use these different retail formats.
Tunisia (a North African country) is experiencing major economic, epidemiological and nutritional changes (11,12) , with a rise in the number of modern supermarkets including the recent opening of two 'hypermarkets' in the vicinity of the capital city, Tunis. Building on a previous paper on the associations between supermarket use and dietary intake (13) , the objective of the present analyses was to examine the socio-economic characteristics of shoppers using different retail formats in Tunisia, and their motivations for doing so. The retail formats were large supermarkets, medium-sized supermarkets and traditional outlets.
---
Methods
---
Study area
Tunisia, a south Mediterranean country, is located between Algeria and Libya, has a population of 10 million and a middle level of development (ranked 91/177 on the Human Development Index composite scale in 2005 (14) ). Our study area was Greater Tunis, with about 2 million inhabitants (15) . It is the most developed and urbanised area in Tunisia and has the most supermarkets. Medium-sized supermarkets have existed in Tunisia for decades, but since the beginning of the 2000s, a major change in the food retail landscape has taken place with the opening of two 'hypermarkets' in the Greater Tunis area. This has also had indirect results in that established supermarket chains have started opening new outlets as well as modernising their internal layout and sales practices (16) .
---
Subjects
A cross-sectional survey was conducted in November-December 2006 in Greater Tunis. Based on data from the 2004 census, the survey used a random, two-level (census area, household) clustered sample of households (17) . In each household, the person in charge of main food shopping was interviewed.
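To make the two-stage design concrete, the sketch below simulates the kind of clustered draw described above (census areas first, then households within the selected areas) and attaches an inverse-probability design weight. The frame, cluster sizes and sample sizes are invented for illustration only; they are not the survey's actual figures.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical sampling frame: households nested in census areas (clusters).
frame = pd.DataFrame({
    "census_area": np.repeat(np.arange(50), 40),   # 50 areas x 40 households each
    "household_id": np.arange(50 * 40),
})

# Stage 1: draw census areas at random; Stage 2: draw households within each selected area.
selected_areas = rng.choice(frame["census_area"].unique(), size=10, replace=False)
sample = (frame[frame["census_area"].isin(selected_areas)]
          .groupby("census_area", group_keys=False)
          .apply(lambda g: g.sample(15, random_state=0)))

# Design weight = inverse of the overall selection probability (area stage x household stage).
sample["weight"] = (50 / 10) * (40 / 15)
```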
---
Data
Part of the survey questionnaire was derived from a preliminary qualitative phase (face-to-face interviews and focus group discussions) to identify the relevant contextual information.
---
Socio-economic characteristics
Socio-economic and demographic data were collected at both individual and household levels (Table 1). An asset-based household economic level proxy was computed by multiple correspondence analysis (18) from dwelling characteristics, utilities and appliances. The first principal component was used as a proxy of relative household wealth (12,19) and was used in analyses after breakdown into tertiles of increasing level (low, medium and high).
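As a rough illustration of how such an asset-based proxy can be built (not the authors' actual code: the paper applies multiple correspondence analysis, whereas this sketch uses one-hot coding plus a first principal component as a crude stand-in, with invented item names and values):

```python
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical dwelling/asset indicators; the survey's real item list is not reproduced here.
assets = pd.DataFrame({
    "owns_car":         [0, 1, 0, 1, 1, 0],
    "owns_fridge":      [1, 1, 0, 1, 1, 1],
    "piped_water":      [1, 1, 1, 1, 0, 1],
    "rooms_per_person": [0.5, 1.0, 0.3, 1.2, 0.8, 0.4],
})

# Recode items as indicator columns, then keep the first component as the wealth score.
# (In a real analysis the sign/orientation of the component would need to be checked.)
indicators = pd.get_dummies(assets.astype(str))
assets["wealth_score"] = PCA(n_components=1).fit_transform(indicators).ravel()

# Break the score into tertiles of increasing level (low, medium, high), as in the paper.
assets["wealth_tertile"] = pd.qcut(assets["wealth_score"], 3,
                                   labels=["low", "medium", "high"])
```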
---
Type of outlet used for main food shopping
Although some analyses pertained to supermarkets, distinction was made between 'medium-sized supermarkets' (MSM) and 'hypermarkets', i.e. 'large supermarkets' (LSM), according to their surface area (≥10 000 m² for LSM).
One reason for the choice of this definition, among others (16), was that, beyond their surface area, hypermarkets in Greater Tunis differ from medium-sized supermarkets in that they are located in a shopping mall comprising a wide range of shops, cafés/cafeterias and a car park, offer a wider range of fresh food departments (catering, bread and pastries, butcher, fishmonger) and also have larger non-food departments. Finally, although supermarkets of medium size are quite evenly distributed throughout Greater Tunis, both hypermarkets are located in the outskirts of the area. In this survey, 'grocers' (attar) are independent family-run food outlets with a sales area of less than 50 m² (16). The term 'market' refers to traditional open-air or covered markets in town centres or neighbourhoods with rows of retailers (6).
The survey questionnaire included items for which interviewees were asked to rank in order of priority (1st, 2nd or 3rd) the three types of outlets where they most frequently did their main food shopping, and also, for supermarkets, included items regarding time and distance to the outlets. For each type of retail outlet (LSM, MSM, grocer, market), binary variables coded whether interviewees used that type of outlet for their main food shopping (regardless of the rank). From the variables pertaining to MSM and/or LSM, a three-category hierarchical variable was computed: never shopped at supermarkets/shopped at MSM only (regardless of other types of outlets but excluding LSM)/shopped at LSM (regardless of MSM or other type of outlets). For both MSM and LSM, easy access (v. not) was defined as living less than 5 km or less than 30 min from a retail outlet.
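A small sketch of how the hierarchical shopping variable and the 'easy access' flag could be derived from such binary and distance/time variables (column names and values are hypothetical, not the survey's):

```python
import numpy as np
import pandas as pd

# Illustrative per-household indicators.
df = pd.DataFrame({
    "uses_LSM":   [0, 1, 0, 0, 1],
    "uses_MSM":   [1, 1, 0, 0, 0],
    "km_to_MSM":  [2.0, 8.0, 4.0, 12.0, 3.0],
    "min_to_MSM": [10, 45, 20, 60, 15],
})

# Three-category hierarchical variable: LSM takes precedence over MSM, regardless of other outlets.
df["supermarket_use"] = np.select(
    [df["uses_LSM"] == 1, df["uses_MSM"] == 1],
    ["LSM", "MSM only"],
    default="never",
)

# 'Easy access' to MSM: living less than 5 km OR less than 30 min from an outlet.
df["easy_access_MSM"] = (df["km_to_MSM"] < 5) | (df["min_to_MSM"] < 30)
```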
---
Reasons for using the different types of food outlet
The questionnaire featured open questions, where subjects could state whatever reasons or motivations they associated with the use of each type of outlet. From the exhaustive list of answers, the twelve most frequently declared items were identified (Table 3) and used in the analyses.
---
Data collection
The questionnaire was translated into Arabic, pre-tested and validated with the target population. Subjects were interviewed at home by specially trained local nutritionists.
---
Ethics
The Tunisian National Statistical Council reviewed and approved the study (visa no. 11/2006). The surveyed subjects were informed of their right to refuse to take part and of the strict respect of the confidentiality of their answers, and gave their verbal consent to take part in the study.
---
Data management and analysis
Data entry, including quality checks and validation by double entry of questionnaires, was performed with EpiData version 3.1 (EpiData Association, Odense, Denmark). Data management was performed with the Stata statistical software package version 9.2 (StataCorp LP, College Station, TX, USA).
We assessed the associations between the multinomial response variable coding shopping at supermarkets (LSM, MSM or never) and socio-economic variables using multivariate multinomial logit regression models (20) . The strength of (crude or adjusted) associations was assessed by relative risk ratios, using 'never' as the reference response variable category. Correspondence analysis was used for analysis of associations between the type of retail outlet and reasons stated for their use (18) .
All analyses took into account characteristics of the sampling design (21) (clustering, sampling weights also including a post-stratification on sex, age and urban v. rural) using the appropriate svy commands of the Stata software. The complete-case analysis method was used to deal with missing data. Results are given as the estimate with its design-based standard error or confidence interval. The type I error rate was set at 0.05 for all analyses.
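For readers less familiar with the model, the following minimal sketch fits a multinomial logit with 'never' as the reference category and exponentiates the coefficients into relative risk ratios. It runs on made-up data in Python/statsmodels and, unlike the paper's Stata analysis, deliberately ignores the clustering, sampling weights and post-stratification mentioned above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120

# Made-up data; variable names are assumptions for the sketch only.
df = pd.DataFrame({
    "supermarket_use": rng.choice(["never", "MSM only", "LSM"], size=n),
    "urban":           rng.integers(0, 2, n),
    "wealth_tertile":  rng.integers(0, 3, n),
    "steady_income":   rng.integers(0, 2, n),
})

# Code the outcome so that 'never' is category 0, i.e. the reference level.
outcome = pd.Categorical(df["supermarket_use"],
                         categories=["never", "MSM only", "LSM"]).codes
X = sm.add_constant(df[["urban", "wealth_tertile", "steady_income"]].astype(float))

fit = sm.MNLogit(outcome, X).fit(disp=False)
rrr = np.exp(fit.params)   # relative risk ratios vs. the 'never' reference category
print(rrr)
```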
---
Results
---
Socio-economic characteristics
From a total of 753 households that were to be included in the study, 724 households were actually surveyed. Most (Table 1) were from an urban area and mean household size was 4.7 (SE 0.1; n 723). One-third of the households (data not shown) declared they owned a car. Two-thirds of the households declared they had a steady income, but only a minority declared they owned a credit card. Those in charge of food shopping were predominantly female; the mean age was 46.2 (SE 0.6) years, most were married; 24.0 % had no schooling at all, while 43.1 % had reached secondary level or higher; the majority (67.4 %) said they did not work outside the home.
---
Type of outlet used for main food shopping
Out of the total of 724 households, 58.8 (SE 4.3) % used supermarkets for their main food shopping (LSM and/or MSM), but only 27.3 (SE 3.6) % declared using LSM (regardless of MSM, grocer or market) and 32.2 (SE 2.9) % used only MSM (i.e. regardless of grocer and market but excluding LSM). Finally, only 4.5 (SE 1.3) % of households used only supermarkets for their main food shopping. Concerning time and distance (n 711), 74.1 (SE 4.2) % had 'easy access' to MSM v. 23.9 (SE 4.5) % only to LSM. Most households, 93.8 (SE 1.6) %, used their nearby grocer and 26.5 (SE 2.8) % used the market.
---
Socio-economic factors associated with shopping at supermarkets
Results of multinomial regression models are presented in Table 2 (n 703, complete-case analysis subsample). Crude associations showed that urban households were much more likely to shop at both MSM and LSM v. never, but in adjusted analyses the association persisted only for MSM. The finding that small households shopped more at MSM v. never in unadjusted analysis did not survive adjustment, but persisted somewhat for shopping at LSM (linear trend P = 0.001). For MSM, the sizeable unadjusted association with the economic level of the household was drastically reduced by the adjustment; conversely, the spectacularly strong unadjusted association between likelihood of shopping at LSM and increasing economic level, though reduced, was still remarkable once the confounding of other socio-economic variables was taken into account. Households with a steady income, a credit card or easy access were twice as likely to shop at MSM v. never, but when adjusted only steady income was still associated; for shopping at LSM v. never, unadjusted associations were stronger for steady income, owning a credit card and easy access, but although still significant, were much reduced after adjustment, indicating that their effect was greatly (though not entirely) confounded by other socio-economic variables. Concerning the characteristics of the person in charge of food shopping, age was not associated with use of MSM or LSM either before or after adjustment, even if the effect of the adjustment was towards more use of supermarkets by younger people. Neither the sex nor the marital status of the person in charge of food shopping was associated with the use of supermarkets. In unadjusted analyses, a high education level was clearly associated with shopping at MSM and even more for LSM; however, once adjusted, a strong independent association with education level only persisted for MSM, while it was much reduced for LSM (twice lower than for MSM). The observed unadjusted association with the professional occupation of the person was mostly confounded by other socio-economic variables.
Regarding access issues, additional analyses were also performed to specifically try to assess associations of supermarket use with car ownership (detailed data not shown). Unadjusted analysis revealed that it was indeed more associated with LSM than MSM use but its effect was entirely confounded by socio-economic variables.
---
Reasons for choice of type of retail outlet
Table 3 lists weighted percentages pertaining to the reasons (rows) given by users for their choice of a specific type of retail outlet (columns). Out of the twelve items, only two pertained to the food products themselves. An equal number of five items was related to characteristics of the store or the shopping itself; among these items, proximity was most often quoted by retail category but also over the whole sample of subjects. Figure 1 displays the combined rows/columns on the two first axes of the correspondence analysis of the choice data. The first and second axis account for respectively 77.1 % and 18.5 % of total inertia, so that the residual information not taken into account is minor; the high percentage of inertia on the first axis and the typical 'horseshoe' shape of the mapping indicate a mostly one-dimensional structure. Contributions to inertia (data not shown) on the first axis of row and column points revealed that the salient feature was that the subjects contrasted 'large supermarkets' (chosen for the 'leisure' dimension of shopping there but not their 'proximity') v. the 'nearby grocer' (chosen mainly because of 'availability of credit', and 'proximity' but also 'emergency shopping' and 'fidelity', but not 'good prices' and not 'quality choice'). Contrasts observed on axis two (details not shown) resulted in a much lower level of information indicating that markets were quoted as being differentially chosen v. all other types of retail because of 'freedom of choice', v. 'large supermarkets' because of their 'proximity', and v. 'grocer' because of their 'good prices'. It should be noted that reasons for the choice of 'medium supermarkets' were not very distinct, their profile being intermediary between 'large supermarkets' and other retail outlets.
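To make the correspondence-analysis terminology above concrete (total inertia, per-axis shares of inertia, row/column coordinates of the bi-plot), the following sketch computes a standard correspondence analysis from an invented reasons-by-outlet contingency table via a singular value decomposition of the standardised residuals; the counts are made up for illustration and do not reproduce the study's Table 3.

```python
import numpy as np

# Hypothetical contingency table: rows = reasons, columns = outlet types
# (LSM, MSM, grocer, market); counts are invented for illustration only.
N = np.array([
    [10, 25, 80, 30],   # proximity
    [35, 20,  5, 15],   # leisure
    [ 5, 10, 60,  5],   # availability of credit
    [20, 25, 10, 40],   # good prices
], dtype=float)

P = N / N.sum()                      # correspondence matrix
r = P.sum(axis=1)                    # row masses
c = P.sum(axis=0)                    # column masses

# Standardised residuals and their singular value decomposition
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

inertia = sv ** 2
print("share of total inertia per axis:", np.round(inertia / inertia.sum(), 3))

# Principal coordinates of rows and columns on the first two axes (for a bi-plot)
row_coords = (U * sv) / np.sqrt(r)[:, None]
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
print(np.round(row_coords[:, :2], 2))
print(np.round(col_coords[:, :2], 2))
```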
Variations around these overall trends were observed according to socio-economic characteristics (detailed data not shown). There was a strong decreasing relationship between household economic level and likelihood of quoting credit as a reason for shopping at the nearby grocer (40.0 (SE 3.5) %, 18.8 (SE 2.6) % and 8.8 (SE 2.5) % for the lower, middle and higher tertile of economic level, respectively, n 685, P < 0.0001). Conversely, the probability of declaring using the nearby grocer for emergency food shopping increased with economic level (4.2 (SE 1.9) %, 13.3 (SE 2.5) % and 25.2 (SE 5.2) % for the first, second and third tertile, respectively, n 685, P = 0.0001).
---
Discussion
In the context of a rapidly evolving nutritional transition and major changes in lifestyle, the present study assessed the relative importance of different types of food retailer (modern and traditional), the socio-economic profiles of consumers and the reasons behind the choice of the different types of outlets in Greater Tunis.
Concerning the overall use of supermarkets, while MSM were used by half the households, only just over a quarter of these consumers also used LSM. As expected, sharp contrasts between areas and socio-economic categories were observed as well as differences according to the type of outlet. A strong association was found with urban area only for supermarkets of medium size but not for large ones, but results pertaining to urban v. rural households should not be overemphasised given the mostly urban nature of the study population: the impact of supermarkets on peripheral rural areas warrants further research. Nevertheless, this result is not entirely surprising given the intraurban location of MSM in the district of Tunis v. more peripheral LSM. It also underlines the existence, all other things being equal, of location issues specific to the type of supermarket (rather independently of other socioeconomic factors, proximity was much more often quoted as a reason for the choice of MSM than for LSM).
Regarding the inverse association between small household size and LSM use, it is likely related to a combination of more 'modern' socio-cultural values (in relation with the demographic transition but also cultural values, e.g. whether or not several generations still live under the same roof) as well as the higher socioeconomic status of smaller households in the context. Although adjustment did reduce the strength of the association by half, it was still quite sizeable, especially for the smaller households; adjustment for socio-economic factors likely only partly accounts for the socio-cultural factors that underlie the relationship between the use of LSM and the size of the household. Concerning household socio-economic level, once adjusted, LSM use was shown to increase drastically with overall household wealth while the association was much weaker for MSM. Having a steady income was found to be independently associated with the use of both types of supermarkets. Having a credit card and easy access to supermarkets were quite specifically associated with LSM but nevertheless strongly confounded by other socio-economic factors (mostly household wealth). For these three factors, the association was nevertheless weak compared with household overall wealth. Among all the characteristics of the person in charge of food shopping, only a specific effect of a higher level of education was clearly associated with shopping at supermarkets, and the association was much stronger for medium-sized than large supermarkets. Concerning age, once adjusted for socioeconomic confounders, associations with age were in line with the hypothesis that shopping at supermarkets and especially LSM would be more frequent among younger customers; but conditional on size of the sample, this could not be inferred to the study population.
Thus, overall, we found that the use of supermarkets is more frequent among socio-economically privileged and more educated consumers in Greater Tunis. This suggests that, in the Tunis area, although supermarkets have been there for a long time, supermarket development is still only at the first step of the model of diffusion. This contrasts with Kenya, a low-income country where 60 % of the 30 % poorest consumers shop at supermarkets (22). Given the three-step diffusion model, this implies that there are context-specific diffusion issues, either cultural or linked to different levels of economic development, or to the relative characteristics of the other types of food retail outlets. It could also be that, in Greater Tunis, MSM and LSM are not at the same stage of diffusion.
[Fig. 1 Bi-plot of the first two axes of the correspondence analysis of reasons stated for the choice of type of food outlet, Greater Tunis, Tunisia, 2006. Labels are centred on (x, y) coordinates; SM, supermarket.]
If we consider that MSM and LSM have in common self-service and differ mainly in their surface area, we could have expected fewer differences between consumer profiles in the two types of retailers. Yet, as indicated by the striking difference between MSM and LSM consumer profiles according to household economic level, we can hypothesise that LSM are at an earlier stage of the supermarkets' diffusion model than MSM, the latter being, at the same moment in time and in the same town, at a more advanced stage. It could also be that, rather independently of the three-step model, MSM and LSM have and will always have their specific consumers, with specific motivations (e.g. leisure for LSM).
Another salient point of our results is that although for a tiny minority of consumers (4.2 %) the main shopping place is supermarkets to the exclusion of all other types of retail outlet, most households still shop at their neighbourhood grocer, whether or not they shop at supermarkets. This suggests that food shopping practices in Greater Tunis are in a transition stage with a combination of both modern and traditional retail food outlets. Indeed at the national level, even if modern supermarkets are increasing in concentration and popularity, the bulk of Tunisian food retailing is still dominated by small neighbourhood grocery shops (23) of which there are around 250 000 in the whole country. These shops are evenly distributed, including in strictly residential neighbourhoods that otherwise feature no commercial activity, so that most inhabitants of our study area are within short walking distance from an attar.
The fact that food shopping still relies heavily on more traditional types of outlet is all the more true for shoppers whose socio-economic status is low, of whom only 4.9 % were found to use hypermarkets and only about a quarter to use MSM in addition to shopping at their local grocer or market. Overall, the reason that most contrasted the choice of grocers v. other types of retail was 'availability of credit'. In other contexts (Brazil and China), it has been shown that supermarkets are starting to offer consumers credit cards and even banking services (24) but in our study area, availability of credit was very clearly quoted mostly only for the neighbourhood grocer. Regarding income-specific differences pertaining to the importance of the availability of credit, households of the lower tertile of economic level were five times more likely to quote this reason for their choice of grocer than those of the higher tertile. This may seem paradoxical, since purchasing food in small quantities from local retailers on a daily basis generally costs more (25), and this feature also stood out in the present study as the neighbourhood grocer was the type of retailer by far the least likely to be associated with good prices or promotions. Nevertheless, for the poorest consumers, the local grocery shop is the main and probably only place where they buy food due to lack of a sufficient steady income (which has been shown to be more associated with supermarket use) despite the fact that, regarding the food products, this type of outlet is much less frequently associated with good quality choice than the other three types of retailers. Interestingly, households from the higher income tertile were six times as likely as the lower tertile to quote 'emergency shopping' as a reason for using the attar, indicating that although this type of retail outlet is widely used by all categories of households, the reasons for doing so are very different.
In addition to financial matters, it was also shown that traditional food retail fulfils social functions, as consumers are still attached to their personal relationship with their local shopkeeper; indeed, this system better meets consumer's social and cultural expectations by allowing them to increase their contact with the outside world in a way that the modern distribution system cannot (8,26) . Although the latter dimension was not directly assessed in our study, the fact that fidelity was much more often quoted as a reason for shopping at the attar v. other types of food retail is likely related to these social and psychological co-factors.
The development of supermarkets is indeed an issue that concerns the diet of high-or middle-income consumers in our study area. Nevertheless, the almost exclusive use of street corner stores for food shopping by lower-income consumers is also an issue. In other settings, some authors have described the emergence of urban 'food deserts,' deprived areas where low-income people have poor access to whole foods e.g. to fruit and vegetables, with probable negative consequences for health (27,28) . The main underlying factors are wealthier people moving from the centre towards the suburbs and with them the supermarkets that used to be located in the city centres. The situation is currently somewhat different in our study area as both traditional markets and many of the medium-sized supermarkets are still located in the downtown area. But this may change over time and indeed, despite the rise of supermarkets, the importance of corner stores should not be overlooked, e.g. for nutrition interventions targeted through the food retail sector (29)(30)(31) .
Regarding the characteristics of the study, its strengths are that the questionnaire was based on a preliminary in-depth qualitative study, that it featured detailed analyses according to the different types of supermarkets and food retail outlets and conducted a detailed assessment and analysis of the motivations behind the choice of the different types of outlets. As for its limitations, one is the cross-sectional design of the survey, which always makes it difficult to interpret observed associations as causal even when care is taken to adjust for relevant confounders (32). The quantitative analysis of declared motivations would have needed to be completed by exploring complex items in more detail (such as 'quality-choice', which could be interpreted differently depending on the type of product it actually refers to). Generalisability issues are always of importance. However, although a small country, Tunisia is emblematic both of fast emerging developing countries from an economic/development point of view, and also of a wide range of south and east Mediterranean countries that share societal and cultural issues. Nevertheless, the results of the present study regarding socio-economic characteristics associated with use of the different types of food retail outlets, though partly similar to those observed in Madagascar (33), do differ from those observed in Kenya (22), Brazil (24) and Guatemala (34). These results show that supermarketisation in the developing world does not operate homogeneously and does not have the same effects in every country. Moreover, our results based on a cross-sectional analysis in 2006 are time-specific and whether or not the current trend in supermarketisation in developing countries will persist is an open question (33).
In emerging countries, in the context of major economic and societal changes, changes in the food retail sector, including the rapid development of supermarkets, have been shown to have consequences for dietary intakes. Nevertheless, studies providing evidence regarding consumers' motivations as well as socio-economic profiles with respect to the type of food outlet for food shopping are rare in south Mediterranean countries. The present study is thus pioneering with respect to changes in food shopping attitudes and practices linked to the modernisation of food retailing in this context. Indeed, we derived substantiated results regarding the actual influence on food shopping habits: (i) the overall limited use of supermarkets by the study population; (ii) the still predominant role of neighbourhood grocers whether or not combined with supermarket use depending on socioeconomic status; (iii) the differential socio-economic profiles of customers of the different types of supermarkets; and (iv) the reasons that motivate use of the different types of outlet. South and east Mediterranean countries are experiencing a fast evolving nutrition transition where obesity and nutrition-related non-communicable diseases are becoming prevalent also among the lower socioeconomic strata (12) . In this context, it could seem feasible and cost-effective for those in charge of nutrition policies to address this issue by implementing nutrition interventions (e.g. financial incentives, nutrition education, promotion of 'healthy' products, informative labelling) only through centralised types of retail such as supermarkets. But the results of the present study underline that such interventions would likely both not cover a significant part of the population and mainly reach only customers of higher socio-economic status, with thus the risk of increasing inequalities regarding food consumption and nutrition-related non-communicable diseases instead of reducing them. | 28,259 | 1,807 |
d7d9d7581f62cdeb9201a828553a0c0cada3beb6 | Health literacy, health status and health behaviors of German students– study protocol for the “Healthy Habits” cohort study | 2,021 | [
"JournalArticle"
] | Background: The emerging adulthood is traditionally viewed as a time of optimal health, but also as a critical life span, characterized by changing life circumstances and the establishment of an individual lifestyle. Especially university life seems to hold several challenges impeding the manifestation of a health supporting manner, as many students tend to show a poorer health behavior and a higher amount of health-related problems than comparable age groups. This, along with a steady growth of the higher education sector, brings increased attention to the university setting in the context of prevention. To date, there are few empirical longitudinal and coherent cross-sectional data on the status of students' health literacy, health status, and health behaviors, and on the impact of the study format on students' health. The aim of this prospective cohort study is to reduce this research gap. Methods: Starting during winter semester 2020/21, the prospective cohort study collects data on health literacy, health status and health behavior on a semester-by-semester basis. All enrolled students of the IST University of Applied Sciences, regardless of study format and discipline, can participate in the study at the beginning of their first semester. The data are collected digitally via a specifically programmed app. A total of 103 items assess the subjectively perceived health status, life and study satisfaction, sleep quality, perceived stress, physical activity, diet, smoking, alcohol consumption, drug addiction and health literacy. Statistical analysis uses (1) multivariate methods to look at changes within the three health dimensions over time and (2) the association between the three health dimensions using multiple regression methods and correlations. Discussion: This cohort study collects comprehensive health data from students on the course of study. It is assumed that gathered data will provide information on how the state of health develops over the study period. Also, different degrees of correlations of health behavior and health literacy will reveal different impacts on the state of students' health. Furthermore, this study will contribute to empirically justified development of target group-specific interventions. Trial registration: German Clinical Trials Register: DRKS00023397 (registered on October 26, 2020). | Background
The emerging adulthood (age span of 18-25 years) is traditionally viewed as a time of optimal health with low levels of morbidity and chronic disease [1,2]. At the same time, young adults appear to be more prone to psychosomatic health symptoms, depending on their individual life satisfaction and perceived future outlook [3,4]. Characterized by changing life circumstances, personal growth and the manifestation of a certain lifestyle, the emerging adulthood is a distinct life phase [5,6]. In comparison with other age groups, young adults tend to consume more alcohol, tobacco and drugs [7,8]. Therefore, this life stage emerges as a vulnerable and critical time, in which specific health interventions might help pave the way for a healthy lifestyle. Especially university life can hold several challenges for students impeding the manifestation of a health-supporting behavior [9].
On the one hand, the variety of study formats opens up considerable freedom for individually adaptable life concepts, such as studying alongside a part-time or full-time job, flexible lecture periods or studying during parental leave. The proportions on the spectrum from purely physical presence on site to exclusively digital forms of learning and examination from home can be selected according to the students' individual life situation [10]. The university setting receives increased attention in the context of prevention, both because of the described health situation of students and a steady growth of the higher education sector [10]. Especially Universities of Applied Sciences (UAS) register an increasing number of students due to offering simplified access for professionally qualified persons, (study) flexibility and a high diversity of studies in the form of dual and part-time courses [10,11].
On the other hand, this freedom and flexibility seem to come at a price. Changes in stress situations and strain parameters can be observed when it comes to meeting work and study requirements. Some studies identified factors such as double and multiple burden situations, a disrupted study-family balance, an uneven study-leisure-time balance and severe work-related psychological stress situations [12][13][14][15]. Other requirements that students face during their studies include, for example, mastering demanding curricula, time-consuming workloads as well as mental and emotional challenges [16]. Current research on students' health in Germany reveals an increased burn-out potential, an overall increased stress load, an above-average level of anxiety, sleep disorders, physical symptoms such as body aches or back pain and an overall subjectively lower-rated health status than comparable cohorts [12,[17][18][19][20][21]. As part of the HISBUS Panel, a large-scale cross-sectional study with a total net sample of n = 6198, female participants in particular reported physical and psychological complaints. Additionally, about 75% of the HISBUS cohort reported suffering from physical complaints several times a month [17]. The students' health status seems to reflect the consequences of permanent overload in diverse ways.
Studies indicate that a poor state of health might result from the interaction of multiple factors, e.g., an insufficient health behavior or a low degree of health literacy [22]. The majority of studies pictures a linear relationship between the three health dimensions, stating that health literacy influences the health behavior of a person and thereby impacts health outcomes [23]. Contrary to that, some studies report a different constellation of the three health dimensions, where this linearity has not been observed at all or where no association was found between health literacy and certain health behaviors, e.g. smoking among health professionals [24,25]. In fact, current studies on college students' health behavior and health literacy point to a linear as well as reciprocal relationship. Accordingly, a linear view with only consecutive effects seems to fall short, as the dynamics of interactions, feedback effects as well as antecedents and consequences cannot be integrated [26]. Accompanying, external or social factors can increase the interaction of the health dimensions, influencing the state of health positively or negatively. With regard to health behavior, the above-mentioned stressors have a negative effect on the amount of students' physical activity and on their nutritional behavior [17,27,28]. Drug and alcohol consumption have also been shown to increase among students [17,29]. Although these findings should be interpreted with caution, the HISBUS Panel [17] attested students a poorer health behavior in many aspects compared to non-students of the same age. In particular, the results revealed lower levels of physical activity, increased alcohol and nicotine use [29], abuses of cocaine and cannabis, as well as increased intake of painkillers [17].
In this context, health literacy is an important individual competence and is related to overall literacy. It includes knowledge, as well as a set of cognitive, social and motivational skills, enabling people to access, understand, appraise, and apply health information [26,30,31]. Also, health literacy entails the capacity to make health-related judgements, take decisions and establish health-promoting behaviors on a daily basis (e.g., a healthy diet, physical activity, stress management) [32][33][34]. This understanding suggests that health-literate students are more likely to address the requirements and burdens described.
Despite the need to gain more understanding of the complex nature of the relationship between the above-mentioned health dimensions, these studies also show different characteristics of the health dimensions among students. This suggests the necessity of different approaches within the framework of possible health interventions.
Against this background, the aim of this cohort study is to gain insight into the relationship and change of UAS students' health literacy, health status and health behaviors during their studies. Empirical inventories of student health differ both in their understanding of health and in the indicators collected [35,36]. Thus, the cohort study's assessment incorporates the broad categories of Dietz et al.'s systematic umbrella review [36] to provide further clarification on the factors influencing student health (substance use, mental health/wellbeing, diet and nutrition, physical activity, sleep hygiene, media consumption and others). In this context, the following research questions will be addressed:
1. How do health behavior, health status and health literacy change during the course of study and after graduation (12 months post)?
2. What influencing factors on health behavior, health status and health literacy of UAS students can be identified?
---
Methods / design
The German health promotion initiative "health-promoting university" is the overarching framework of the initiated Healthy Habits research project [37]. The cohort study is founded on a biopsychosocial and salutogenetic approach and assumes a multidimensional health continuum [38,39]. If the salutogenetic approach is applied to the health of individuals, a three-way split emerges, where the state of health dynamically results from the aspects of health behavior as a generalized source of resistance and health competence as a superordinate empowerment in the sense of coherence. In summary, this leads to an understanding of health as a multidimensional and dynamically interacting construct, with the three core dimensions health status, health literacy and health behavior (see Fig. 1).
---
Design of the study
The research design follows a longitudinal, prospective cohort study of enrolled UAS students at the IST University of Applied Sciences in Germany. STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guidelines were applied in alignment with the research objective [40]. The frequency of data assessment is set to a semester-by-semester cycle (see Fig. 2).
During the winter semester 2020/2021 the first semester students are being recruited for the first time.
---
Sample and sample size
[Fig. 1 The multidimensional and dynamic construct of students' health as the underlying construct]
Students were invited by email to participate in the cohort study and were additionally introduced to the Healthy Habits project (official German website under https://healthyhabits.ist.de/) in several seminars at the beginning of the semester. The email contains information about the study, an invitation link to the research homepage and an identification code. The invitation email was sent to all active and enrolled first-semester students of all departments (sports business, fitness & health, tourism & hospitality, communication & business). Students who have set their status to inactive (e.g. maternity leave or personal matters) for more than one semester will not be included. Since this is an exploratory cohort study, no formal sample size calculation was done. We assume the participation rate of first-semester students to range from 20 to 40%, which would yield an average dataset of n = 400 per semester. This calculation is made conservatively due to the constraints imposed by the Covid-19 pandemic.
---
Data collection
Data is collected online using a questionnaire tool implemented in a progressive web application. This app is specially programmed for this research project. The questionnaire can be edited step by step, answers are saved automatically. There is no possibility to skip single items. After answering all questions, the students can submit their results and with that make no further changes. Gathered data is stored on a separate server, taking into account current European as well as federal data protection security standards (DSGVO) in full. A connection to student records at the IST University of Applied Sciences is excluded, nor is the project team able to gain access to the user profile credentials.
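As a toy illustration of the questionnaire flow described above (step-by-step completion, automatic saving of each answer, no skipping, and no changes after submission), the sketch below models this logic in Python; it is not the project's actual progressive web application, and the item identifiers and storage callback are hypothetical.

```python
class QuestionnaireSession:
    """Toy model of the app's questionnaire flow: items must be answered in
    order, every answer is persisted immediately, and submission locks the
    session against further changes."""

    def __init__(self, items, save_callback):
        self.items = list(items)          # ordered item identifiers
        self.answers = {}
        self.submitted = False
        self._save = save_callback        # e.g. writes to server-side storage

    def next_item(self):
        return self.items[len(self.answers)] if len(self.answers) < len(self.items) else None

    def answer(self, item_id, value):
        if self.submitted:
            raise RuntimeError("Questionnaire already submitted; answers are final.")
        if item_id != self.next_item():
            raise ValueError("Items cannot be skipped; answer them in order.")
        self.answers[item_id] = value
        self._save(item_id, value)        # automatic save after each answer

    def submit(self):
        if self.next_item() is not None:
            raise ValueError("All items must be answered before submission.")
        self.submitted = True


# Example usage with an in-memory 'store' standing in for the study server
store = {}
session = QuestionnaireSession(["MEHM1", "SWLS_1", "SWLS_2"], save_callback=store.__setitem__)
session.answer("MEHM1", 2)
session.answer("SWLS_1", 5)
session.answer("SWLS_2", 4)
session.submit()
print(store)
```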
---
Variables under study and assessment
Health status, health behavior and health literacy are registered on the basis of different domains, for which a positive correlation with the respective health dimension could be determined.
Health-related quality of life, sleep quality, overall life satisfaction, self-perceived stress and self-perceived health status are seen as predictive measurements for health status [9,41,42]. To assess the dimension of health behavior, the domains of health-related physical activity, screentime, nutritional behavior, alcohol consumption, smoking habits and drug consumption are referred to [43]. Health literacy is the only dimension which is validated as a construct itself and will therefore not be predicted through other surrogate constructs. Table 1 provides an overview of the selected constructs and the primary outcome parameters to operationalize the three health dimensions. To gather comparable data, the selection of variables was based as far as possible on similar studies on each of the three dimensions.
The assessment is composed of 10 established questionnaire-based instruments with a total of 101 items. As Table 1 shows five instruments are used to assess health status. Health behavior uses a total of four instruments. One instrument has been selected to assess health literacy.
To obtain a representative picture of students' health status, a single item of the Minimum European Health Module (MEHM1), 5 items of the German version of the Satisfaction With Life Scale (SWLS), 7 items of the German Life- and Study-Satisfaction Scale (LSZ) and 10 items of the German version of the Perceived Stress Scale (PSS) are used (see Table 1). Health-related behavior covers a variety of behavioral domains and their measurement in large cohort studies is very complex. For the described research project, the domains of physical activity, screentime, nutrition, smoking habits as well as alcohol and drug consumption are of interest. Related data is collected by using 8 items of the Physical Activity section of the European Health Interview Survey (EHIS-PAQ) and 6 items of the Brief Alcohol Screening Instrument in Medical Care (BASIC). Smoking habits (1-3 items), drug consumption (7 items) and nutrition behavior (13 items) are assessed with a total of 23 adapted items of the FEG-questionnaire (original: Fragebogen zur Erfassung des Gesundheitsverhaltens [Questionnaire to assess health behavior]). Non-smoking participants have to answer only 1 item and are led to the next domain. To measure time spent with digital devices, 6 items of the self-rated Screen-time Questionnaire [63] were selected, modified and supplemented.
The 16-item European short form of the Health Literacy Survey (HLS-EU-Q16) concludes the assessment. The authors of this paper reviewed the criticism of the original version of the HLS-EU [72] and therefore selected the latest updated short form of the instrument. The published reference values as well as the statistically supported counter-publication underline the benefits of the HLS [34].
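The assessment structure described in the preceding paragraphs can be summarised as a simple mapping of dimensions to instruments and item counts; this is a descriptive summary drawn from the prose above, not study code, and instruments not itemised in the text (such as the sleep-quality measure) are omitted, which is why the counts do not sum to the reported 101 items.

```python
# Summary of the assessment battery as named in the text
# (dimension -> instrument -> number of items as stated there).
ASSESSMENT = {
    "health_status": {
        "MEHM (single item)": 1,
        "SWLS (German version)": 5,
        "LSZ (Life- and Study-Satisfaction Scale)": 7,
        "PSS (German version)": 10,
    },
    "health_behavior": {
        "EHIS-PAQ (physical activity)": 8,
        "BASIC (alcohol screening)": 6,
        "FEG (smoking, drugs, nutrition; adapted)": 23,
        "Screen-time Questionnaire (modified)": 6,
    },
    "health_literacy": {
        "HLS-EU-Q16": 16,
    },
}

# Only explicitly itemised instruments are counted here; the protocol reports
# 10 instruments and 101 items in total.
listed_items = sum(n for instruments in ASSESSMENT.values() for n in instruments.values())
print(listed_items)
```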
For all instruments, item content and answering format are used as published and have only been modified to fit the digital progressive web application.
---
Statistical analyses
Descriptive statistics (mean, standard deviation (SD), median, minimum, maximum, absolute and relative frequencies) will be conducted to describe the cohorts' sociodemographic features (gender [male/female/diverse]; age [year of birth]) and study-related characteristics (type of degree [BA/MA], field of study [health-related studies vs. non-health-related] and study format [dual/part-time/full-time]). This stratification will apply to all statistical analyses.
The changes in health behavior, health status and health literacy (research questions 1 & 2) will each be evaluated by means of repeated-measures analysis of variance. After checking the statistical model prerequisites, sociodemographic and study-related influencing factors on health behavior, health status and health literacy will each be tested by means of linear regression analysis. For all calculations the level of statistical significance will be set to p < 0.05 [73] and SPSS® (Statistical Package for the Social Sciences, IBM, Version 27) will be used.
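The authors plan to run these analyses in SPSS; purely as a hedged illustration, the sketch below runs an analogous repeated-measures analysis of variance and a linear regression in Python with statsmodels on simulated long-format data, with hypothetical variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
n_students, n_semesters = 60, 3

# Simulated long-format data: one health-literacy score per student and semester
long = pd.DataFrame({
    "student": np.repeat(np.arange(n_students), n_semesters),
    "semester": np.tile(np.arange(1, n_semesters + 1), n_students),
})
long["hl_score"] = 30 + 0.8 * long["semester"] + rng.normal(0, 3, len(long))

# Change over the course of study: repeated-measures ANOVA (within factor = semester)
anova = AnovaRM(long, depvar="hl_score", subject="student", within=["semester"]).fit()
print(anova.anova_table)

# Influencing factors: linear regression of the score on example covariates
long["dual_format"] = np.repeat(rng.integers(0, 2, n_students), n_semesters)
ols = smf.ols("hl_score ~ semester + dual_format", data=long).fit()
print(ols.summary().tables[1])
```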
---
Discussion
Attending a university or UAS is a life-changing event in general and can be a very formative phase of life for young adults. Students will learn to deal with stress, the burden of learning for exams, setbacks as well as successes and overall to take responsibility for themselves. Unfortunately, taking care of one's own health is not always priority number one during that phase of life. Current studies provide indications that students show a poor health behavior [17,29]. The overall consequences of an unhealthy lifestyle as well as the insufficient management of psychophysical requirements are not only reflected in a poorer state of health, but also have an impact on the course of the study. Lower academic performance, a significantly longer duration of study and even drop-outs are possible consequences [16,74]. According to the German Center for Higher Education and Science Research (original: Deutsches Zentrum für Hochschul-und Wissenschaftsforschung [DZHW]) the dropout rate ranges from 15 to 35% depending on the type of study and the subject [75].
Addressing these aspects efficiently and sustainably with interventions requires a further understanding of how health changes during the course of study as well as of the impact of influencing factors. A mere consideration of health status does not capture this complexity, since it is not always known whether a poor health status results from an insufficient health behavior or a lack of competence. Recent research shows that only about 30.3% of students have sufficient health literacy [76]. There are also significant differences between male and female students. Furthermore, students with a migrant background as well as students with lower degrees (bachelor's degrees) and first-semester students have significantly poorer health literacy [77][78][79]. These studies also suggest the existence of different target groups within the setting of UAS students, which in turn should be approached differently with tailored interventions. To the authors' best knowledge, such comprehensive studies have not yet been conducted in a UAS setting.
Despite growing scientific interest in student health research in recent years, the current amount of data is consistently inadequate. Most of the existing studies either looked at the three health dimensions separately from each other or are based on cross-sectional examinations [9,17,20,41]. Longitudinal studies on the three health dimensions over the course of the study, on the other hand, are rare. Also, the quantity and quality of studies investigating the associations between the described health dimensions and their mutual influence within the student setting are insufficient.
Despite the promising potential of the Healthy Habits research project, field research challenges as well as limitations have to be acknowledged. As a consequence of the Covid-19 pandemic, the start of participant recruitment had to be postponed to December 2020. In addition, as a result of federal restrictions all in-person seminars are prohibited, so that for the entire winter semester 2020/2021 only online-based seminars are offered. First-semester events such as initiations and other in-person inauguration seminars have been canceled. Therefore, communication with the students can only take place digitally.
Another potential distortion can be caused by the assessment itself. After completing the app-based questionnaire, the results are displayed in the form of a radar chart. Each health dimension is displayed separately, reflecting aspects of the selected assessment instruments. The authors are aware of the fact that receiving an evaluation of one's questionnaire responses might be seen as a first health intervention, increasing students' awareness of health topics. The overarching intention is to motivate students to participate in the assessment sustainably.
The Healthy Habits research project's major strengths are the longitudinal design and the app-based approach to reach an increasingly digitally affine target group. This mainly digital approach widens the spectrum of possible interventions, which also vary by format, content and degree of individualization. Fields of action (original: Handlungsfeld) are legally defined areas in which preventive interventions have to take place, including physical activity, diet, stress and addiction. Next to classic course interventions, additional formats may include gamification elements such as challenges or quizzes, push messages, podcasts, blogs, webinars or scribble videos. Also, it is possible to address subgroups or single individuals of the target group by assigning achieved assessment scores to certain interventions. The findings will bring greater understanding of how to address students' challenges with tailored preventive interventions.
---
Availability of data and materials
The datasets used and/or analysed during the study will be available from the corresponding author on reasonable request.
---
Declarations
---
Ethics approval and consent to participate
For future publications based on the described research project ethical approval was granted by the independent ethics committee of German Sports University Cologne on October 21st 2020 (version 1.0; reference 146/ 2020) including participant information material, website information and informed consent form. The written consent to participate is given by the students with the first log-in to the research project website.
---
Consent for publication
Not applicable.
---
Competing interests
The authors declare that they have no competing interests.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 20,159 | 2,362 |
22fb1e898cd06a35bbb57ecfbc2d35f214187316 | Safer Sex in Later Life: Qualitative Interviews With Older Australians on Their Understandings and Practices of Safer Sex | 2,018 | [
"JournalArticle",
"Review"
] | Rates of sexually transmitted infections (STIs) are increasing in older cohorts in Western countries such as Australia, the U.K. and the U.S., suggesting a need to examine the safer sex knowledge and practices of older people. This article presents findings from 53 qualitative interviews from the study "Sex, Age & Me: a National Study of Sex and Relationships Among Australians aged 60+." Participants were recruited through an online national survey. We consider how participants understood "safer sex," the importance of safer sex to them, the safer sex practices they used (and the contexts in which they used them), and the barriers to using safer sex. Older adults had diverse understandings, knowledge, and use of safer sex practices, although participants tended to focus most strongly on condom use. Having safer sex was strongly mediated by relationship context, trust, perceived risk of contracting an STI, concern for personal health, and stigma. Common barriers to safer sex included erectile difficulties, embarrassment, stigma, reduced pleasure, and the lack of a safer sex culture among older people. The data presented has important implications for sexual health policy, practice, and education and health promotion campaigns aimed at improving the sexual health and wellbeing of older cohorts. | Introduction
A raft of recent research illustrates that many people continue to have sex well into and throughout later life (Bergstrom-Walan & Nielsen, 1990;Bourne & Minichiello, 2009;Field et al., 2013;Lindau et al., 2007;Mercer et al., 2013;Schick et al., 2010). Indeed, Swedish research by Beckman, Waern, Ostling, Sundh and Skoog (2014) illustrated that levels of sexual activity may be on the rise amongst older cohorts, suggesting it is increasingly important that we pay attention to the sexual health and well-being of older people. For many, sexual expression, pleasure and identity remains important as they age, although research also indicates there is diversity amongst older adults in this regard (Field et al., 2013;Fileborn et al., 2015a;Fileborn, Thorpe, Hawkes, Minichiello, & Pitts, 2015b;Gott & Hinchliff, 2003;Gott, Hinchliff & Galena, 2004;Lindau, Leitsch, Lundberg, & Jerome, 2006;Minichiello, Plummer & Loxton, 2004). Changes in social norms in the Englishspeaking West regarding the acceptability of divorce and re-partnering after divorce or the death of a partner also mean that there are greater opportunities for initiating new sexual and romantic relationships in older age (Bateson, Weisberg, McCaffery, & Luscombe, 2012;DeLamater, 2012;DeLamater & Koepsel, 2015;Idso, 2009;Nash, Willis, Tales, & Cryer, 2015). Accompanying advances in technology and the development of online dating have facilitated the process of finding sexual and romantic partners in middle and later life (Bateson et al., 2012;Malta, 2007;Malta & Farquharson, 2014). This continuation of sexual activity and shifts in sexual partnerships in later life have, however, been accompanied by increases in the rates of sexually transmitted infections (STIs). While older cohorts still make up a minority of STI diagnoses overall, rates in these groups have steadily increased in many countries across the Anglophone-West. For example, in Australia, rates of chlamydia diagnoses rose from 16.4 per 100,000 in 2010 to 26.6 per 100,000 in 2014 in the 55-64 age group (The Kirby Institute, 2015). Rates of gonorrhoea and syphilis in this age group also rose during this timeframe. This mirrors international trends across countries such as the U.S. and U.K. (Centers for Disease Control and Prevention, 2014;Minichiello, Rahman, Hawkes, & Pitts, 2012;Poynten, Grulich & Templeton, 2013;Public Health England, 2016). Despite rising STI rates in older populations, we know surprisingly little about their safer sex practices and knowledge of STIs. The limited research undertaken to date suggests that older people do not consistently practice safer sex (Altschuler & Rhee, 2015;Bourne & Minichiello, 2009;Dalrymple, Booth, Flowers, & Lorimer, 2016;de Visser et al., 2014;Foster, Clark, McDonnell Holstad, & Burgess, 2012;Grant & Ragsdale, 2008;Lindau et al., 2006;Reece et al., 2010;Schick et al., 2010), may lack effective condom use skills (Foster et al., 2012), and report low rates of testing for HIV and STIs (Bourne & Minichiello, 2009;Dalrymple et al., 2016;Grulich et al., 2014;Schick et al., 2010;Slinkard & Kazer, 2011).
Several authors have noted that older heterosexual women may face a higher susceptibility to HIV and STI transmission on account of the physiological and hormonal changes that typically accompany ageing, such as decreased estrogen production leading to thinning of the vaginal wall and subsequent greater susceptibility to tears and cuts (Altschuler & Rhee, 2015;Brooks, Buchacz, Gebo, & Mermin, 2012;Idso, 2009;Johnson, 2013), and the low rates of condom use of post-menopausal women who no longer fear unintended pregnancy (Altschuler & Rhee, 2015;Bateson et al., 2012;Idso, 2009;Johnson, 2013;Lindau et al., 2006). Older men can be reluctant or unable to use condoms as a result of erectile difficulties (Idso, 2009;Johnson, 2013), and older men who take erection-enhancing medications can face a higher likelihood of contracting an STI (Smith & Christakis, 2009). However, we know comparatively little about how older adults understand and define "safer sex," nor the contextual factors that shape and inform their use of safer sex practices and the importance of safer sex to them.
Older people grew up in a time when discussions of sex and sexual health were largely taboo, comprehensive sexuality education was generally not available (Cook, 2012;May, 2006;Pilcher, 2005), and STIs were highly stigmatised (though this is arguably still the case in many respects) (Altschuler & Rhee, 2015;Bourne & Minichiello, 2009;Grant & Ragsdale, 2008;Idso, 2009;Nash et al., 2015;Slinkard & Kazer, 2011). Additionally, the dominant sexual and gendered sexual scripts that older people grew up with may restrain their ability to openly negotiate condom use or other safer sex practices in new sexual relationships (Altschuler & Rhee, 2015;Bateson et al., 2012;Nash et al., 2015;Zablotsky & Kennedy, 2003). Further, safer-sex campaigns and policy are typically targeted towards younger people (Bateson et al., 2012;Bourne & Minichiello, 2009;European Expert Group on Sexuality Education, 2016;Gedin & Resnick, 2014;Kirkman, Kenny & Fox, 2013;Nash et al., 2015). Health care professionals are often reticent to discuss sex per se with older patients (Gott et al., 2004;Grant & Ragsdale, 2008;Nash et al., 2015;Nusbaum, Singh, & Pyles, 2004) and older people often wait for their health care provider to initiate discussions on sex (Lindau et al., 2006;Nash et al., 2015;Nusbaum et al., 2004;Slinkard & Kazer, 2011). These contextual factors may shape and limit the safer sex knowledge and practices of older people; however, further qualitative research is required to examine and confirm the extent to which this may occur (Bateson et al., 2012).
There is currently a lack of research, particularly qualitative, on older people's knowledge of safer sex and the safer sexual practices they engage in (Bateson et al., 2012).
Qualitative research is needed to provide a detailed understanding of the perspectives and decision making processes that older people engage in when having sex in circumstances that present a higher likelihood of STI transmission (Dalrymple et al., 2016). In particular, knowledge is currently lacking about the ways in which older people understand and define "safer sex," the importance they attach to safer sex in particular relationship contexts, the types of safer sex they use, and the potential barriers to using different types of safer sex. Our study, "Sex, Age and Me: a National Study of Sex and Relationships Among Australians Aged 60+" was established, in part, to examine these issues. Key aims of this exploratory project were to explore older adults' knowledge of, and use of STI prevention and safer sex.
The first Australian study of its kind, Sex, Age and Me collected quantitative and qualitative data (see Lyons et al., under review, for further details). With regard to the latter, 53 qualitative interviews with older men and women were conducted, and a subset of findings pertaining to interview participants' understandings and use of safer sex is explored in this article. The findings have important implications for informing strategies aimed at stemming the rise of STI rates amongst older cohorts, within policy, health services and health promotion.
---
Theoretical frameworka life course perspective
Our research is situated within a life course perspective, which suggests that older people's understandings and practices of safer sex are shaped by and "within the context of both generational time and historical time" (Ballard & Morris, 2003, p. 134). Safer sex practices are themselves historically and culturally situated, and vary over time and cultural context (Donovan, 2000a, 2000b). Additionally, a life course approach recognises the diversity within lived experience, and that people of the same chronological age may have different experiences based on their particular social and cultural locations (Ballard & Morris, 2003).
Participants in our study belong to the "Baby Boomer" generation, and this likely shapes their current experiences and understandings of safer sex. The Baby Boomers are frequently credited with responsibility for leading the "sexual revolution" in the 1960s and 1970s across English-speaking Western countries, particularly the U.S., U.K., and Australia.
The sexual revolution critiqued and challenged dominant sexual norms of the time, although the extent to which it actually influenced the sexual lives and practices of our participants' generation is contested (e.g., Fileborn et al., 2015a). For instance, some participants in Fileborn et al.'s (2015a) Australian study commented that the sexual revolution had impacted more significantly on their children's sexual lives than their own, and that their own sexual practices had continued to be shaped by conservative sexual norms. The Baby Boomers are likewise often credited with challenging dominant norms around ageing "appropriately," and in refusing to perform "older age" in the same way as their parents, particularly when it comes to sex (Fileborn et al., 2015a). Again, while there is likely to be considerable variation in the ways that Baby Boomers are actually approaching older age, it is important to situate the findings of our research within particular historical and contemporary contexts.
---
Method
---
Participants
Fifty-three semi-structured individual interviews were conducted with Australian women (n = 23, 43.4%) and men (n = 30, 56.6%) aged 60 years and older from August 2015 to January 2016. Two female participants were aged in their mid-to-late 50s; these women were included in the study due to difficulties recruiting women for the interviews. Interview participants were recruited through the online survey conducted in phase one of the Sex, Age & Me study, which had attracted 2,137 participants from all major areas of Australia. Survey participants were recruited through a range of avenues, including an article published in The Conversation by two of the authors and subsequent media attention, age-targeted Facebook advertisements, through local and national ageing organisations, local governments, senior citizen clubs, and sexual health clinics. The survey sample was a convenience sample; however, we were able to target recruitment efforts towards specific key subgroups, and the sample was diverse, including participants from all major sociodemographic backgrounds and from all states and territories of Australia. Survey participants who were interested in taking part in a one-on-one interview were invited to provide their name and a contact email (these details were not stored with their survey responses). A total of 517 individuals expressed interest in taking part in an interview. Every third person who expressed interest was contacted, resulting in 175 individuals being contacted by email and provided with a participant information statement that explained the purpose of the study (to examine the sexual health, relationships, dating and sexual practices of older people, and knowledge of STIs), the general topics the interview would cover, what participation would involve, and the potential risks of taking part. These individuals were asked to contact the interviewer (Author 1) if they would like to participate. Of these 175 individuals contacted, 53 individuals from across Australia responded and agreed to take part. We did not recruit any more participants as data saturation was reached. An overview of the interview participants is provided in Table 1.
[Table 1: Sample profile of Sex, Age and Me interview participants (n = 53)]
Measures. The interview schedule focused on participants' understandings of sex and sexual satisfaction, the importance of sex and sexual satisfaction, their understandings and use of safer sex, their help seeking practices, and background demographic information. As the interviews took a semi-structured approach, additional lines of questioning were taken based upon the unique issues raised by each participant; however, the relevant questions from our interview schedule are included in Table 2.
Procedure. Interviews were conducted by phone (n = 41), Skype (n = 10), or face-toface (n = 2) depending on the participant's preference and geographical location. While conducting interviews via Skype is a relatively novel approach, research to date suggests that conducting interviews in this way (and via phone) does not negatively impact upon data quality, and in some contexts may even enhance it (Hanna, 2012;Holt, 2010;Sturges & Hanrahan, 2004). On average, the interviews took 30-60 minutes to complete, were audio-recorded with the participant's consent, and transcribed by a professional service. The transcripts were de-identified, and participants assigned pseudonyms. Ethics approval was received from the La Trobe University Human Research Ethics Committee prior to the commencement of the research.
[Table 2: Interview questions on safer sex]
Analysis. The qualitative data were analysed using the software package NVivo, and followed a thematic analysis procedure outlined by Ezzy (2002) and Braun and Clarke (2006). The first-named author conducted the primary analysis. This process involved an initial close reading and preliminary coding of the transcripts. Notes were made identifying emerging themes, using the interview questions and core study aims (e.g., discourses on sex and relationships, understandings of safer sex) as initial code categories (i.e., a mix of inductive and deductive coding was used). In vivo codes were also identified throughout this process based on emergent themes and patterns within the data. This process was then repeated in NVivo, with the data sorted into code and sub-code categories. Particular attention was paid to the recurrent themes and patterns in the data, but also to cases that contradicted, complicated, or otherwise sat outside of the dominant thematic categories. This enabled us to account for the complexity and nuance in older people's experiences. A random sample of 10 interview transcripts was independently coded by the fifth-named author (WH) to ensure the validity of the coding, with both coders agreeing on the dominant thematic categories.
---
Results
---
What is safer sex?
Participants were asked about their understandings and definitions of the term "safer sex," and the types of safer sex they used. Seven main themes were identified: using condoms, preventing STI transmission, discussing STI history, STI testing, monogamy, avoiding certain sexual practices, and self-care. Some participants indicated that they did not have safer sex, and we examine their reasons for this briefly. Many participants offered complex and multi-faceted definitions and practices of safer sex, and their practices tended to evolve over the course of a relationship, although there was variation between participants in this regard. For many participants, "safer sex" referred predominantly to condom use, and these terms were used synonymously at times. The issue of trust often permeated these practices.
---
Using condoms. Condoms were by far the most common element of participants' discussions of what "safer sex" is. Given the centrality of condom use in STI prevention and sexual health campaigns, this is largely unsurprising. For example, Karen (64 yrs, heterosexual, single) said that condoms were "primarily what I think of when I think of safe sex." While condoms are promoted as a key safer sex strategy, they are not an infallible method, particularly when used incorrectly, and they do not prevent all STIs. Only a small number of participants acknowledged the limitations of condom use as a safer sex strategy. Kane (63 yrs, heterosexual, in a relationship) noted that "condoms are ineffective against some kinds of infections," such as crabs (pubic lice), although it is notable that Kane learnt this only after embarking on some pre-interview research on Wikipedia "about STIs just in case you asked me." Another participant, Tim (62 yrs, gay, in a relationship), viewed condom use as one component of safer sex strategies. Tim offered a comprehensive and sophisticated definition of safer sex, saying "safer sex is lower risk activities…using condoms, minimising exchange of bodily fluids and skin contact." Tim also believed that as a gay male he had been exposed to considerably more public health campaigns and education on sexual health than heterosexual people in his age group would have, and this likely accounts for his knowledge of safer sex. Tim was particularly concerned about rising rates of syphilis infection within the gay community, and commented that "condoms can reduce the risk but...you can get syphilitic sores in the mouth or elsewhere in the body." This suggests that condoms may be seen as a safer sex strategy for certain types of sexual practices, with Tim's comments implying that condoms are not used for oral sex. As we discuss below, having a strong understanding of what constitutes safer sex did not always follow through to participants' use of safer sex. Both Tim and Kane acknowledged the limitations of all safer sex strategies, with Tim noting that these practices lower, rather than erase, the probability of STI transmission.
Condom use was strongly influenced by relationship context. Participants commonly discussed condoms as something that they used in new or casual sexual relationships. Gwen (65 yrs, heterosexual, single) said that she used "the old fashioned condom, particularly with anyone new." However, if these encounters progressed to a longer-term relationship Gwen would say to her partner "well let's go to the STD clinic and then we don't have to use condoms anymore, if we're both clear." A number of participants also discussed being strict with condom use with new sexual partners after either contracting an STI or being exposed to one in the past. For example, Martha (61 yrs, heterosexual, married) had a rule of "no condom, no sex" after she contracted genital warts from her first husband. Likewise, Rachel (64 yrs, heterosexual, in a relationship) insisted on using condoms with new partners after being exposed to hepatitis C, and only ceased using condoms with her current partner on the provision that they both have regular sexual health screenings. While participants such as Gwen and Rachel only phased out condoms after having STI tests, other participants viewed the use of condoms earlier in a relationship as "going through the motions." For instance, Beverly (66 yrs, heterosexual, single) described how she had new sexual partners use condoms early in their relationship:
But it was more just like a perfunctory thing…because you know they weren't going to use condoms the whole time and so it was just in the beginning until I knew that I wanted to stay with them and then it was okay for them to stop using condoms.
For Beverly, condom use was only seen as necessary while the relationship was in its formative stages. Progression to a more "serious" relationship rendered the use of condoms unnecessary; however, this decision was made in the absence of any STI testing or further discussion of sexual health. The cessation of condom use either with or without STI testing once a relationship became established appeared to be a common practice amongst our participants.
Preventing STI transmission. Some participants defined safer sex as being more generally about STI prevention. While condoms were often an important part of this, these participants tended to focus more strongly on the prevention of disease transmission, rather than the particular strategies that might be used to prevent this. One participant, Zane (80 yrs, bisexual, married/open relationship), defined safer sex as "preventing somebody else or any two people passing on something that they've acquired God knows where, to another partner." Another participant, Amelia (73 yrs, heterosexual, in a relationship), commented "safe sex these days is more about not getting STDs than anything else." Amelia's remarks suggest that meanings of safer sex shift temporally. Indeed, many participants commented that when they were younger the concept of "safer sex" generally referred to pregnancy, rather than STI, prevention. For heterosexual participants who viewed safer sex as predominantly related to pregnancy prevention, this could render safer sex as an irrelevant concept to them once they (or their partner) were no longer able to become pregnant.
Discussing STI history. Talking to a sexual partner about their STI and/or sexual history was another common component of safer sex. For some participants, this meant having an explicit conversation about their current STI status. For instance, Marty (77 yrs, heterosexual, in a relationship) said "if I had a conversation with somebody and was assured that they didn't have any sexually related diseases, then I'd probably feel fairly confident."
For some participants, discussions about sexual health with their partner formed a key aspect of safer sex. As highlighted above, this could involve talking about when they would cease using condoms in a relationship, and arranging for STI tests prior to this. Some participants utilised discussions with partners (or potential partners) as a way to determine whether condoms or other safer sex measures were necessary. Rather than involving explicit discussions on STI testing and sexual health history, these discussions provided opportunities to make a series of judgements about a partner's character and the perceived likelihood that they would have an STI. For example, Ivy (62 yrs, heterosexual, single) commented that "it's a whole new world compared to when I was young," and that because of this she always raised the issue of safer sex with new partners. However, in determining whether or not she […]

Discussing sexual health as a safer sex practice was often based on the premise that participants trusted a sexual partner to tell them if they had an STI, or trusted them not to have an STI. Trusting a partner's response appeared to absolve the need to use other types of safer sex such as condoms. As Kane said, "my preference is not to use a condom and if I'm attracted to a woman my inclination is to trust her, and one of the things I trust her to do is not to give me an STI." However, another participant, Dani (71 yrs, heterosexual, in a relationship), highlighted the limitations of trust as a safer sex practice in this regard, saying, "they could even say they've had a sexual test and be lying about it, couldn't you? Unless you saw the piece of paper. Yeah, I think I would be wanting to use condoms."

STI testing. STI testing was mentioned relatively infrequently by participants in their definitions of safer sex. Shane (72 yrs, heterosexual, married) said that he would want to know that a new sexual partner "had their sexual health checked and had the tests [to be]…reassured that they didn't have any sexual disease." However, Shane qualified this by suggesting that he would be more concerned if the new sexual partner was male, or if they had not come from a long-term monogamous relationship. Again, this suggests that safer sex practices are seen as context dependent, and as less relevant to those involved in monogamous heterosexual relationships.
Although only a small number of participants discussed STI tests in their definitions of safer sex, many more indicated that they had used STI testing as part of their safer sex practices, as the preceding discussion has illustrated. Some participants, predominantly women, reported that they insisted on their new partners taking STI tests before having unprotected sex (i.e., without a condom). For example, Tina (60 yrs, heterosexual, married) told her now husband, "either you're going to use condoms or we are all going to have the full suite of tests beforehand. He opted for the full suite of tests…we both had every test that you could possibly have." Wilma (61 yrs, heterosexual, widow) decided to have an STI test after being involved in a relationship with a man who she "wasn't totally trusting," although they had consistently used condoms. However, her doctor was disparaging of the need for her to be tested, saying he was sure Wilma would be fine. The tests only proceeded because of Wilma's insistence that "I really need to have one." A small number of participants also discussed using blood donations as a proxy for STI tests. For example, Kane (63 yrs, heterosexual, in a relationship) said when he was donating blood regularly "I was being tested…every fortnight, so I was pretty sure that I was clear." While blood donors in Australia are screened for blood-borne viruses, this screening does not cover all STIs, making this approach to testing limited and risky.
Another participant, Aaron (65 yrs, heterosexual, single), said that he also gives himself "a check regularly as well, so I'm modern in that thinking." Aaron's comments imply that, for some older people, STI tests may be viewed as irrelevant or only of concern to young people.
Gwen (65 yrs, heterosexual, single) saw the process of having an STI test and revealing the results to a new partner as developing "a whole higher level of trust between you…it actually brings you closer together I think." In this way, STI testing can be used as a mechanism for producing trust in a new relationship. Given the centrality of trust in safer sex, this has important implications for the framing of sexual health campaigns targeted towards older people.
Monogamy. Monogamy was often used as safer sex, both within the context of long-term monogamous relationships, and for those who were entering into new relationships with someone who was previously in a monogamous relationship. For example, Xavier (65 yrs, heterosexual, married) said that safer sex was not important in his relationship as he had been with his wife for 42 years, and "safe sex is something you do with people you don't know…If we had any STDs we would've known by now." Another participant, Carl (62 yrs, heterosexual), was involved in three simultaneous, "monogamous" relationships, which he believed protected him from STIs because he assumed his three partners did not have other partners. Others were more cautious. For example, Leila (61 yrs, heterosexual, married) said that while "you can relax a little bit" in a long-term relationship, she would "still be very careful…you really never know someone, you just don't."
For those entering into new relationships, serial monogamy (or a relatively "inactive" sexual life) was seen as being protective against STIs. For instance, Dani (71 yrs, heterosexual, in a relationship) said that she did not have an STI test before having unprotected sex with her partner because "he wasn't having much sex, I don't think."
Likewise, Oliver (66 yrs, heterosexual, friends with benefits relationship) said that he "didn't even think about" the issue of safer sex with his partner, because she had not been in a sexual relationship for a very long time. However, monogamy does not always offer protection against STIs, as Elli (59 yrs, bisexual, single) discovered when she contracted herpes after having unprotected sex with someone who had just left a 30-year monogamous relationship.
Sexual practices. For a minority of participants, limiting their sexual practices to activities they viewed as lower risk was an important safer sex strategy. Notably, this strategy was mentioned by two male participants who had sex with men, who both discussed engaging in practices that presented a lower risk of HIV transmission. For example, Fred (60 yrs, bisexual, single/casual sexual relationships) did not have anal intercourse with his regular male sex partner, and said, "we don't do anything that is really hazardous in terms of HIV," though some of his sexual practices may expose him to other STIs. Likewise, Tim (62 yrs, gay, in a relationship) said for him safer sex might involve "lower risk" activities such as "kissing, mutual masturbation, digital stimulation and masturbation, anything that's essentially non-penetrative." Other participants indicated that they would simply not have sex with someone if they believed they might have an STI. As Dylan (65 yrs, heterosexual, long-distance relationship) noted, discussing the fallibility of condoms, "the only perfect one is to not do it, so if I'm worried I'll leave."
Not practicing safer sex. Finally, a few participants indicated that they did not have safer sex in their sexual relationships. For example, Beverly (66 yrs, heterosexual, single), who was casually dating, said:

I pride myself in looking after myself my mental health and my physical health but when it comes to sexual health you know I've been a bit irresponsible really and it's hard for me to sort of own up to that.
Likewise, Carl (62 yrs, heterosexual, multiple relationships), who had multiple, simultaneous "monogamous" relationships said, "no way, I don't use condoms," while Kane (63 yrs, heterosexual, in a relationship) reported that "post-menopausal women are awfully cavalier" about condom use, so he had rarely used condoms throughout his multiple sexual relationships.
Self-care and well-being. Some participants provided definitions of "safer sex" that extended beyond the prevention of STI transmission to include emotional, psychological, and physical well-being and safety in an intimate relationship. This type of definition was well encapsulated in Rachel's (64 yrs, heterosexual, in a relationship) comment that safer sex is:

About knowing yourself really well, and understanding all the emotional aspects around sex…understanding…the brain chemistry behind attachment, behind sexual attraction, behind being sexually active…having an understanding of how your thinking works, being a bit mindful about your thinking.
Another participant, Fred (60 yrs, bisexual, single/casual sexual relationships), highlighted an apparent paradox in the relationships between safer sex, caring for one's partner, and the role of trust and stigma relating to STIs. Fred noted that suggesting to a partner that they, for example, use a condom "has two contradictory effects. One is, 'I'm trying to look after you'. It's a positive message to the other person…But the other thing is 'I don't trust you and you shouldn't trust me'." This suggests that the emphasis on "trust" between sexual or romantic partners has the potential to hinder engagement in safer sex and self-care practices.
---
Importance of safer sex
Our discussion thus far has considered how older adults understand and define the concept of "safer sex," and the safer sex strategies they used. We move on now to consider how important safer sex was to participants. The importance of safer sex seemed to be closely connected with relationship context and trust, perceived risk levels, and concern for personal and public health. These factors often co-informed one another, and were not mutually exclusive.
Concern for personal health. For some, safer sex was important due to a concern for their personal health and a desire to avoid any unpleasant symptoms. For example, Sally (71 yrs, heterosexual, widow), who had experienced extensive health problems relating to her reproductive system, said safer sex was highly important to her as "I don't need to get infected with anything, I've had enough problems in that area." A number of individuals indicated that safer sex was important to them after being exposed to the early stages of the HIV/AIDS epidemic, whether through health promotion strategies, employment in health care settings, or having friends or family members diagnosed with HIV/AIDS. Igor (78 yrs, heterosexual, married) previously worked in an HIV clinic and as a result was "determined that I was never going to die of HIV, nor was I going to impose it on somebody else."
For others, avoiding STIs was a matter of "common sense." Juliet (69 yrs, heterosexual, in a relationship) said that although she did not view STIs as shameful, "if it's avoidable, it's just the most sensible thing." Others saw the prevention of STIs as a matter of personal responsibility and commitment to public health. For instance, Norman (69 yrs, heterosexual, married) said "it's obviously important to maintain a healthy population and not to spread disease by sexual means or any other if you can help it."
Stigma. The importance of safer sex was also linked to the stigma attached to STIs and having multiple sexual partners, and the feelings of shame this engendered. Safer sex was important to Ivy (62 yrs, heterosexual, single) because she believed that having an STI "at this age…would ruin any future dating life." However, Ivy did not distinguish between different types of STIs and it was therefore unclear whether she was referring to treatable STIs, non-treatable STIs, or both. Stigma played a somewhat paradoxical role here: it simultaneously increased the perceived importance of safer sex, while also contributing towards a culture in which having an STI is highly shameful and difficult to discuss due to a fear of being ostracised or rejected as a sexual partner. It is also apparent that for Ivy, the stigma or shame associated with having an STI would be further compounded by her age ("at this age").
Safer sex as less relevant in later life. A minority of participants reported becoming more pragmatic about safer sex in later life. For example, Marty (77 yrs, heterosexual, in a relationship) said he was less concerned about contracting STIs compared to when he was younger as he took the view that:
If I did get an STI I'd probably be able to get it cured fairly easily, and maybe it doesn't matter so much, and maybe even HIV would be less of a threat in that I don't have such a long life ahead that I'd have to live with it.
Marty's position in the life-course clearly influenced his views about sexual risk taking and living with disease. Others reported that safer sex was relatively unimportant to them because they did not think it related to older people. Amelia (73 yrs, heterosexual, in a relationship), for example, thought that safer sex "possibly wouldn't even enter most people's mind to even do," as most of her generation grew up in the pre-AIDS era where "safer sex" related primarily to pregnancy prevention. As a result, Amelia said that even when she was exposed to safer sex messages "you sort of think, well it doesn't apply to me; that applies to young people." Likewise, Karen (64 yrs, heterosexual, single) commented that many people in her age group would hold the view that "they're in the safe category, that STDs is only something that [happens to] younger people who have…more than one partner," as a result of social norms when she was growing up that STIs were only an issue for sexually "promiscuous," "bad," or "dirty" people.
Relationship context. The perceived importance of safer sex was also related to the relationship context. For instance, Amelia commented, "if you're in a more or less steady relationship and you trust the person you're with, it's not so important…it depends a lot on the relationship, what's safe and what's not." Likewise, many participants commented that safer sex would be important if they were to start a new relationship or dating casually should their current relationship end. Oliver (66 yrs, heterosexual, friends with benefits relationship) saw casual dating as being a particularly "high risk" time where safer sex would be more important, "while you're trying to find a more stable place to express your sexuality."
Interestingly, Elijah (63 yrs, heterosexual, single), who was a long-term client of a sex worker, also viewed trust and relationship length as essential to the importance he placed on using condoms. For instance, if a sex worker offered to have unprotected sex at an early encounter, he reasoned that "well, obviously she is doing that with everyone," making it a higher-risk decision.
In contrast, Elijah said, "if it happens over a relationship period…you develop a trust."

STI risk. The perceived risk or likelihood of contracting an STI also influenced the level of importance some participants placed on having safer sex. As noted above, many participants viewed monogamous, long-term relationships as "low risk", and in many respects this is a fair assessment, given that older people are indeed less likely to have an STI in comparison to their younger counterparts. Likewise, other participants made judgements about the perceived likelihood a partner had an STI based on their number of sexual partners or social standing. For some participants the perceived risk of contracting an STI was deemed to be low based on their past experiences. For example, Vaughn (71 yrs, heterosexual, in a relationship) reflected on how when he was young there was a "plague" of gonorrhoea.
Vaughn said that "at the time I was having about 10 different women in a month…and I only caught gonorrhoea once…and I went through hundreds of people, literally." For another participant, Fred (60 yrs, bisexual, single/casual sexual relationships), the risks presented by unprotected sex were also a component of sexual pleasure and excitement. Fred admitted that while he had "taken more risks than any rational person would," these risks were "part of the 'fun at the fair.' And when you get away with it and then you go 'wow! That was a rush'."
Dylan (65 yrs, heterosexual, long-distance relationship) believed that "the unsafe sex thing is a beat up in many of the same ways we beat up other safety things," and argued that the risks of unsafe sex were relatively trivial and easily addressed through medical treatment. Because of this, Dylan believed that safer sex was largely unnecessary.
---
Barriers to safer sex
Embarrassment. For some participants, negotiating safer sex with a partner was viewed as an embarrassing endeavour for a number of distinct reasons. Elli (59 yrs, bisexual, single), who had herpes, said that she felt daunted at the prospect of having to raise the issue of safer sex with any new sexual partners for the first time in her life. For Elli, this was daunting because of "the interference with spontaneity and just the embarrassment of having to tell somebody that I'm carrying the herpes virus, which just feels completely bizarre in terms of the amount of sex I've had." This embarrassment was linked to the stigma associated with having an STI, as well as the implications Elli believed this would have for her sexual reputation.
Embarrassment about using safer sex was also linked to the fact that for many older adults, safer sex has not been a core part of their sexual repertoire. This was expressed by Jack (64 yrs, heterosexual, married), who said "I think it may be a little more confronting and embarrassing for older people…young people would probably…do it as a normal course of events." Vicki (73 yrs, heterosexual, single) believed that many older men were not knowledgeable about condom use because they had never (or rarely) had to use condoms growing up. Vicki commented that it "takes a really confident man to say 'I don't really know how to do this,' especially in bed," suggesting that embarrassment about ineffective condom skills may form a barrier to some older men having safer sex. A number of participants commented that older adults were still influenced by the social norms and taboos surrounding sex when they were growing up, where "frank and fearless communication wasn't a big part of it" (Marty, 77 yrs, heterosexual, in a relationship). The lingering effects of these attitudes made discussing safer sex challenging for some older adults. For instance, Rachel (64 yrs, heterosexual, in a relationship) commented that norms around sexual "promiscuity" meant that for some older women admitting to being sexually active by, for example, requesting an STI test could be "deeply humiliating." However, some participants challenged the notion that embarrassment about safer sex was age-specific. Instead, embarrassment about sex was viewed as related to individual proclivity or personality traits, but, as Leila (61 yrs, heterosexual, married) argued, "that can apply at any age."

Erectile difficulties. Erectile difficulties were a significant barrier to many men in using condoms as a form of safer sex. Participants with erectile difficulties frequently commented that using a condom would cause them to lose their erection, or that they were unable to successfully put a condom on due to an insufficient erection. This suggests that safer sex education for older adults must extend beyond simply encouraging condom use, as for many this was simply not an acceptable avenue of protection. This also points to the importance of decoupling condom use and safer sex, which are often treated as synonymous. Such thinking can limit the identification of alternatives to condom use that may reduce the risk of STI transmission. If safer sex is linked solely to condom use, then in the event that condoms can no longer be used, having safer sex becomes impossible.
Lack of skills, experience, and safer sex culture. As many within the current older cohort did not receive comprehensive sexuality education when growing up, a lack of knowledge regarding STIs and safer sex practices was raised by participants as a major barrier to having safer sex. This was particularly the case for those who had been in long-term, monogamous relationships and had had no perceived need for safer sex other than to prevent pregnancy. For example, Elli (59 yrs, bisexual, single) said that safer sex is "just not part of the frame of reference with a lot of people over 55." Similarly, Edwin (66 yrs, heterosexual, married) commented that "our age group aren't equipped, we don't have the culture…for dealing" with safer sex. As a result, some older people may be lacking the knowledge, skills, and awareness to have safer sex.
The assumption that "you know everything because you've reached this age" (Wilma, 61 yrs, heterosexual, widow) or that you should "know better" as an adult could also function as a major barrier to seeking out information on safer sex. Wilma commented that some people may "feel humiliated and they don't want to ask those questions of the doctor" due to the perception that they should already know about safer sex. This assumption and stigma around a lack of knowledge was actively perpetuated by some participants. For example, Dan (63 yrs, heterosexual, married) said that "by the time you get to our age you've been around the block once or twice so you'd be pretty stupid if you didn't know what it was all about."
Another participant, Gwen (65 yrs, heterosexual, single), believed that while older people did know about safer sex and had learnt about it when they were young, this knowledge was not being reinforced as they got older. Certainly, Gwen's comments reflect current evidence that safer sex education is targeted almost exclusively towards younger people (Kirkman et al., 2013).
Stigma. The continued stigma surrounding STIs figured as a barrier for some participants in using or negotiating safer sex. While stigma around STIs is also an issue for young people, this may be heightened for older people given the conservative norms governing sex when they were growing up. A number of women recounted stories where a male partner had been "insulted" after they asked them about their STI history. Vicki (73 yrs, heterosexual, single) believed this was because "in the old days…it was prostitutes and…loose women" who used condoms. Indeed, Rachel (64 yrs, heterosexual, in a relationship) shared an experience of a sexual partner refusing to have sex with her after she asked him to use a condom, saying to her "what sort of woman carries a condom…obviously you sleep around with everyone." Fred (60 yrs, bisexual, single/casual sexual partners) also indicated that this association could make discussing condom use "awkward" because it was akin to saying "'well, you must be promiscuous,' and that's not something most women want to think about themselves." It is notable that it was only female participants who reported feeling judged, and only women who were viewed negatively for raising the issue of safer sex (see also Dalrymple et al., 2016). This suggests that the barriers to using safer sex in later life operate in highly gendered ways.
Reduced pleasure. As has been well documented in the literature on safer sex (e.g., Crosby, Yarber, Sanders & Graham, 2005; Higgins & Wang, 2015), the belief or experience that condoms reduce sexual pleasure was a disincentive to using condoms. Jack (64 yrs, heterosexual, married), for example, said that he "enjoy[s] sex more without a condom."
Gwen (65 yrs, heterosexual, single) commented that it could be difficult to negotiate condom use with men who believed that condoms decrease or remove their sexual pleasure, "the usual classic complaint from men." While this is a common barrier to using condoms across all age groups, some male participants indicated that the impact on sexual pleasure was heightened in older age. For instance, Vaughn (71 yrs, heterosexual, in a relationship) said that "we're certainly not as sensitive as we were, so wearing a condom tends to make things very insensitive," and this could make it difficult to achieve orgasm. Vicki (73 yrs, heterosexual, single) believed that older men were "remembering using the old type of condoms, which…were thicker." As a result, older men's experiences of using condoms were potentially "more unpleasant than it needs to be now," suggesting that overcoming assumptions based on past experiences of condom design may be needed to increase willingness to use condoms.
For women who experienced vaginal dryness after menopause or due to various health conditions, condom use could be painful, although younger women have also reported experiencing vaginal irritation as a result of condom use (Crosby et al., 2005). While Sally (71 yrs, heterosexual, widow) acknowledged that use of lubrication could help with this, she doubted "whether I'd find anybody that would be willing to go through all the preparations necessary…it wouldn't be spontaneous." This suggests that while the physiological issue of vaginal dryness can make condom use difficult, beliefs around how sex "should" occur (in this case, as a spontaneous, "natural" process without interruption or the use of sexual aids) also act as a barrier to engaging in practices (such as using lubricant) that would facilitate condom use (see also Diekman, McDonald & Gardner, 2000).
---
Discussion
While participants in this study discussed a broad range of safer sex practices, there was a strong emphasis on the use of condoms in comparison to other forms of safer sex (such as STI testing or engaging in lower risk, non-penetrative sex), although data from the Second Australian Study of Health and Relationships (ASHR2) suggests that condoms are not commonly used by older Australians (de Visser et al., 2014). Likewise, while practices such as discussing sexual health and history with a partner were raised, this was often presented as a strategy for making a value judgement on the perceived likelihood that a partner would have an STI. This echoes the findings of Hillier, Harrison and Warr's (1998) earlier research with Australian high school students, who likewise reported that condom use was virtually synonymous with safer sex, while trusting a partner and informally discussing sexual history were key safer sex strategies.
There was great variation regarding the extent to which safer sex was important to participants, and this was strongly mediated by relationship context. For many, safer sex was seen as relevant to new, casual relationships, and in contexts where a sexual partner was not "trusted," extending the findings of Dalrymple et al.'s (2016) research with late middle-age adults in the U.K. While the overall themes identified here are in many ways similar to studies conducted with younger age groups, the context and the ways in which these themes play out in the lives of older people is distinct and shaped by the interplay of ageism, cohort norms regarding sex, and more general stigma around STIs and sex.
When it came to having safer sex, there was again much variation. While some participants placed great importance on using condoms and having STI tests with new partners, for others having safer sex was often context dependent and based upon assumptions about their partner's sexual health status. Trust was fundamental in shaping safer sex, with condoms or STI tests seen as unnecessary with a trusted partner. This echoes the findings of research undertaken with younger samples (e.g., Crosby et al., 2013; Hillier et al., 1998). For example, Crosby et al. (2013) reported that women in their sample aged 25 and older were more likely to believe that condom use signified a lack of trust in one's partner compared to their younger counterparts. Notably, while many of our participants were in what might be considered "low risk" (long-term, monogamous) relationships and were unlikely to contract an STI, even those engaging in comparatively "higher risk" sexual relationships did not necessarily view safer sex as relevant to them. Implicit in these attitudes was the assumption that STIs are visible to the naked eye, and that you can tell if someone has an STI. This reflects the findings of research with younger cohorts (e.g., Barth, Cook, Downs, Switzer & Fischhoff, 2002).
The absence of sex education and a perceived lack of widespread condom use while growing up also meant that some older people may lack the knowledge, skills and cultural/social norms to have safer sex, and this barrier is more specific to older adults. It was apparent that norms and beliefs about safer sex from when participants were growing up continued to shape the understandings and practices of at least some older people. Some participants implied they were able to predict whether a partner had an STI based on their character and/or perceived number of sexual partners, and a number of female participants had experienced hostile responses from male partners after asking them to use condoms. The embarrassment and stigma associated with STIs and sex continued to act as a major barrier to discussing safer sex with a partner or healthcare provider, and this reflects the findings of research with younger cohorts (e.g., Barth et al., 2002; Hood & Friedman, 2011), though the impact of stigma plays out in different ways for older people. For instance, the stigma of having an STI may be compounded by the widespread cultural assumption that older people do not, or should not, have sex.
Our findings have important implications for policy, practice, and sexual health promotion initiatives aimed at reducing STIs amongst older cohorts. There is a clear need to challenge gendered norms and stigma about safer sex and "promiscuity" held by some members of older cohorts. The belief that only "promiscuous" or "dirty" people have safer sex functioned as a major barrier to having safer sex (and particularly condom use), hindered the ability to negotiate safer sex (particularly for older women), and meant that many older men and women did not view themselves or their partners as "at risk" of, or likely to have, an STI. In many respects, these findings are similar to those from research with younger adults (e.g., Barth et al., 2002; Hillier et al., 1998). Sexual health promotion strategies must clearly communicate that older people are sexually active and susceptible to STIs, that safer sex practices are relevant to older people, and that STIs are a normal (though not inevitable) aspect of sexual activity. Relatedly, campaigns must seek to disrupt dominant sexual scripts that hinder safer sex. The notion that sex should be "spontaneous" and "natural," without interruption or discussion of any kind, could act as a barrier to discussing or having safer sex (see also Diekman et al., 2000; Galligan & Terry, 1993, for similar findings with younger samples), not to mention discussion of other components of sexual health and wellbeing, such as consent and the negotiation of pleasure (Dune & Shuttleworth, 2009). Such actions may help to shift safer sex cultures amongst older cohorts in a way that facilitates their use.
Vitally, the promotion of safer sex must move beyond a sole focus on condom use to include a multi-faceted and holistic approach to sexual health promotion. For many individuals in this study, condom use was not appropriate due to erectile difficulties or other health issues. This represents a unique challenge for promoting condom use amongst older age groups (Schick et al., 2010), although other studies have also indicated that erectile difficulties can influence correct condom use amongst younger men (Crosby, Sanders, Yarber, Graham & Dodge, 2002; Graham et al., 2006; Sanders, Hill, Crosby & Janssen, 2014). While awareness of condoms as a safer sex strategy was high, there was considerably less discussion on STI testing (see also Hillier et al., 1998), and this coheres with findings from ASHR2 suggesting that participants aged 60-69 were the least likely to have had an STI test in the past 12 months. Regular STI testing, particularly for those with new or multiple sexual partners, represents a more accessible form of safer sex for those who are unable to regularly use condoms. Public health campaigns targeted towards older people could also include guidance on successfully putting a condom on a semi-erect penis. Likewise, efforts to normalise the use of lubricant during penetrative sex may also be of benefit, particularly given that some participants viewed lubricant use as disrupting the "natural" flow of sex.
It is also notable that, with the exception of two male participants who had sex with other men, participants did not discuss engaging in sexual practices that presented lower risk of disease transmission as a form of safer sex (see also Hillier et al., 1998). It is unclear whether participants did not recognise this as a safer sex strategy (an issue that could be addressed through public health and educational campaigns), or whether heterosexual participants adhered to the idea that penetrative, penis-in-vagina intercourse constitutes "real" sex. Recent qualitative Australian research illustrates that older women hold diverse views of what "counts" as sex, though some still privileged penetrative heterosexual intercourse as "real" sex (Fileborn et al., 2015a; Fileborn et al., 2015b). Participants in the present study held similarly diverse views of what sex "is." Nonetheless, adherence to the view that penetrative intercourse constitutes real sex may prevent older people from engaging in lower-risk sexual practices as a form of safer sex, and suggests a need to continue to challenge and disrupt social and cultural norms that privilege penetrative intercourse.
Additionally, it is important to note that many participants' understandings of "safer sex" were, in some respects, quite narrow. While there was a strong focus on STI prevention, issues such as sexual consent, wellbeing, and ethics were raised by only a small number of participants, despite these being key components of the World Health Organization (WHO) definition of sexual health (WHO, 2006). Given that our participants grew up in a context of limited sexuality education, it is possible that this continues to shape their current understandings of safer sex, and this highlights the importance of situating safer sex within a life course perspective. Our findings suggest that sexual health campaigns for older people may also need to address broader issues such as those identified above, though this warrants further investigation. Some participants reported negative or dismissive experiences with healthcare providers after requesting an STI test. As previous research has illustrated, healthcare providers are often reluctant to address issues of sexual health with older patients (Gott et al., 2004; Kirkman et al., 2013). Our findings indicate the need for training and education for healthcare providers regarding sexual health in later life. There is a clear role here for healthcare providers to initiate discussions with older patients regarding sexual health, and to be receptive to this issue when raised by patients.
Educational and other efforts targeted towards older people may also benefit from taking into account the major barriers and facilitators to safer sex reported by participants.
Trust was essential to participants' understandings of safer sex, and the importance and use of safer sex. Having trust typically meant that there was no perceived need to have safer sex.
However, as one of our participants suggested, having STI tests and discussing safer sex could in fact build trust between partners. Reframing safer sex as being fundamentally about trust and trust building may therefore encourage older people to have safer sex. It is also important to challenge the notion that monogamy offers protection against STIs, and to encourage older people to have an STI test or to use other forms of safer sex with all new partners.
As concern for personal health and well-being facilitated the use of safer sex, this could also underpin educational campaigns. For example, concern for health could be utilised to encourage older people to have an STI test, though such campaigns should be targeted towards older people in "high risk" groups for STIs, given the generally low likelihood of older people contracting STIs overall. Given that some participants reported that they did not see information about safer sex as relevant to them (see also Dalrymple et al., 2016), it is important that safer sex campaigns or educational resources be clearly targeted towards older people, or at least be inclusive of older populations. Any targeted resources may need to cover the "basics" of condom use and other safer sex practices, and provide a discreet and non-judgemental source of information. Such resources should also cover issues specific to older cohorts. For instance, this may include information on how condom design has changed over time to enhance sexual pleasure.
---
Limitations
There were some limitations to this study. As this was a qualitative study, we were concerned with generating an in-depth exploration of participants' understandings and practices, and the findings presented here are not generalizable. The participants were generally highly educated, articulate, comfortable discussing sex, and from an Anglo-Saxon cultural background. Future research is necessary to identify any differences in attitudes and practices in more diverse demographic groups. Likewise, the majority of our participants identified as heterosexual, and the experiences of sexuality- and gender-diverse older people require further examination. While participants were asked to respond to open-ended questions about what safer sex "is," and the safer sex practices they engaged in, the broader project from which these data stem (and particularly the online survey component of this project) had a strong focus on STIs and STI prevention. It is possible that this shaped our participants' definitions of safer sex and the types of safer sex they discussed. It is notable, for example, that participant discussions of safer sex focused almost exclusively on STI prevention as opposed to more holistic definitions inclusive of issues such as sexual consent and sexual pleasure.
---
Conclusion
Giving consideration to the sexual health of older people is becoming increasingly important, particularly with an ageing population where older people are remaining sexually active for longer and experiencing an increase in STI rates. This study, the first of its kind in Australia and one of only a handful internationally, has provided important insight into the complexities and nuances of older people's understandings of safer sex and their safer sex practices. Our findings point to a considerable degree of variation in practice and knowledge.
Likewise, while there is some similarity in understandings and use of safer sex with young age groups, our findings suggest that there are unique contextual factors and implications for older people. The continued influence of a range of myths and misconceptions about safer sex and STIs was also apparent. Importantly, these findings present valuable insight into the ways in which we may begin to initiate change to help improve and support the sexual health and well-being of older populations.
5e9977ef4588b70b120b4a5becef282b903b1517 | Civic Ecology Uplifts Low-Income Communities, Improves Ecosystem Services and Well-Being, and Strengthens Social Cohesion | 2,021 | [
"JournalArticle",
"Review"
] | Ecosystem services enhance well-being and the livelihoods of disadvantaged communities. Civic ecology can enhance social-ecological systems; however, their contributions to ecosystem services are rarely measured. We analysed the outcomes of civic ecology interventions undertaken in Durban, South Africa, as part of the Wise Wayz Water Care programme (the case study). Using mixed methods (household and beneficiary (community members implementing interventions) surveys, interviews, field observations, and workshops), we identified ecosystem service use and values, as well as the benefits of six interventions (solid waste management and removal from aquatic and terrestrial areas, recycling, invasive alien plant control, river water quality monitoring, vegetable production, and community engagement). Ecosystem services were widely used for agriculture, subsistence, and cultural uses. River water was used for crop irrigation, livestock, and recreation. Respondents noted numerous improvements to natural habitats: decrease in invasive alien plants, less pollution, improved condition of wetlands, and increased production of diverse vegetables. Improved habitats were linked to enhanced ecosystem services: clean water, agricultural production, harvesting of wood, and increased cultural and spiritual activities. Key social benefits were increased social cohesion, education, and new business opportunities. We highlight that local communities can leverage natural capital for well-being and encourage policy support of civic ecology initiatives. | Introduction
The magnitude of human activities has pushed us into the epoch of the Anthropocene, where we risk crossing planetary boundaries that would cause catastrophic and irreversible environmental changes, with negative consequences for human well-being [1]. It is predicted that anthropogenic environmental pressures will intensify in the future, resulting in further environmental degradation, climate change, and pollution, and impacting on the ability of natural capital to provide ecosystem services [1][2][3]. Ecosystems and their services, or "nature's contributions to people (NCP)" [4], are essential to support human well-being and development [2]. It is understood that natural capital underpins social, human, and built capital, and the interaction between these various forms of capital will determine the levels of well-being that humans could achieve in a particular context through, for example, ecosystem services [5]. Ecosystems and people are interdependent and intertwined through the concept of social-ecological systems.
Social-ecological systems research looks at the reciprocal interactions between people and nature at various temporal and spatial scales [6]. Knowledge of the social, ecological, and other components in a system, and of the use and benefits of ecosystem services, is needed in order to derive maximum benefit from interactions in a system. Social-ecological systems provide a basis for understanding the interlinked dynamics of environmental and societal change [6]. Since human activities are the major drivers in social-ecological systems, in that they can either diminish or enhance ecosystem services and well-being [7], societal change would be essential to ensure ecosystem service protection and sustainability [8].
To foster societal change towards support for environmental management, we need an understanding of how biodiversity and ecosystem services are perceived by humans. Such perceptions would include the way in which humans observe, value, understand, and interpret biodiversity and ecosystem services [9].
Demands for ecosystem services increase with growing populations in cities [10], particularly in cities of the global south, which face the added pressures of poverty and the direct dependence of the poor on ecosystem services for their livelihoods and well-being [11,12]. Ecosystem services provide the foundation for economic opportunities to empower the disadvantaged [2]. The disruption of social-ecological linkages can have detrimental effects on communities, particularly when access to ecosystem services is denied [13], or when ecosystem disservices, such as floods or invasive species, are experienced. This raises the importance of understanding and strengthening social-ecological linkages, while ensuring that ecosystem services are managed appropriately, particularly in disadvantaged communities.
Civic ecology initiatives, or "community-based conservation", aim to provide diverse environmental and socio-economic benefits through people-centred participatory approaches [14]. Civic ecology practices include environmental stewardship actions that enhance natural capital, ecosystem services, and human well-being, in social-ecological landscapes, such as cities [7]. While civic ecology practices are increasing and contributing to global sustainability initiatives, their contributions to ecosystem services are rarely measured [7].
In this study, we examined the understanding, use, and values of ecosystems and their services in two low-income local communities, one peri-urban/rural and one urban, where some community members are implementing civic ecology initiatives. As a case study, we used the private sector-funded Wise Wayz Water Care (WWWC) programme, being implemented along the Golokodo and Mbokodweni Rivers, within Durban, South Africa (Figure 1). Using a mixed methods approach (household surveys, interviews, field observations, workshops), we investigated the following questions: (1) What are the values and perceptions held by the beneficiaries (people from the community working as part of the WWWC civic ecology programme), and the broader community, related to the WWWC civic ecology programme? (2) What are the various benefits of civic ecology practices to the social-ecological system of disadvantaged communities, particularly with respect to ecosystem services? (3) How do ecosystem service uses and values differ between the beneficiaries and the broader community? In answering these questions, we explored how increased knowledge of ecosystems through civic ecology practices in social-ecological systems contributes to the protection and increased use and benefit of ecosystem services, both for beneficiaries and other members of disadvantaged communities.
---
Materials and Methods
---
Study Area
---
Socio-Economic Characteristics
The WWWC work area, the study area (Figure 1), is situated in two peri-urban communities, Folweni and Ezimbokodweni, located in Durban, in the province of KwaZulu-Natal, South Africa. Both fall within the eThekwini Metro Municipal boundary. Folweni is more urban and is administered by eThekwini Municipality, while Ezimbokodweni is more peri-urban/rural and is jointly administered by eThekwini Municipality and Ingonyama Trust Board (traditional authority of communally owned rural lands). The study area is characterised as one of the poorest in Durban, with low education, employment, and income levels. In Folweni, 17% have no source of income and 37% earn less than ZAR 1600 (USD 99.60 at an exchange rate of ZAR 16.06 per USD 1) per month, 35% have secondary education, only 6% have higher education, 53% of households have piped water inside the dwelling, 42% have flush toilets connected to a sewer, and 47% of households are headed by females [15]. Similarly, in Ezimbokodweni, 20% have no source of income, a third of the population earn less than ZAR 1600 per month, 30% have completed secondary education, only 2.8% have higher education, 10.7% of households have piped water inside the dwelling, 4% have a flush toilet connected to a sewer, and 40% of households are headed by females [15].
Sewage infrastructure in the Folweni area is poorly maintained; most of Ezimbokodweni utilises informal pit latrines and is not serviced by waterborne sewer systems, and sewage has been observed surcharging into watercourses in both areas [16]. A small number of households in Ezimbokodweni are located within the 1:100 year floodplain of the Mbokodweni River. Solid waste is a problem, and smaller streams have become blocked by solid waste, invasive alien plants, and illegal sand mining, resulting in stagnant water that exposes the community to various water-borne diseases [17]. Issues in the broader area, as noted in the Local Area Plan, include sanitation being a major problem (with failing and unhygienic ventilated improved pit latrines), lack of recreational facilities and meeting venues, lack of tertiary educational facilities, and poor/lack of housing facilities [18].
---
Bio-Physical Characteristics
The study area has a moderate climate, situated in a coastal climatic zone, with mean annual temperatures of between 18.5 and 22 °C and a mean annual rainfall ranging between 820 and 1423 mm. The study site is traversed by the Mbokodweni and Golokodo rivers, which fall within the U60E quaternary catchment and the North Eastern Coastal Belt aquatic ecoregion [19]. Numerous wetlands and drainage lines are present along the rivers (Figure 1). River flows, widths, and depths vary across the study area, and between wet and dry seasons. Sites along the Golokodo River are up to 10 m wide and 1 m deep, and flows range from slow to moderate to fast. River substrates include sand and bedrock. Along the Mbokodweni River, widths and depths range from 3 to 20 m and 0.5 to 2 m, respectively, with moderate to fast flows. The dominant substrates are sand, bedrock, and cobble [17].
Results from biological monitoring of Durban's aquatic systems revealed that 71 of the 175 sites are considered to be in a poor state, and only 3 sites are in a near natural state [20]. Impacts on rivers include illegal spills and discharges, solid waste dumping, sand mining, poor operation of wastewater treatment works, realignment of watercourses, flow reduction, removal of riparian flora, and infestation by invasive alien plants [20]. The rivers in the study area are similarly classified as being impacted by solid waste pollution, bank and channel modification, and invasive alien plant invasion [17,21].
All of the sites are found in the KwaZulu-Natal Coastal Belt vegetation type, within the Indian Ocean Coastal Belt Bioregion [22]. This vegetation type is classed as endangered. Vegetation of significance is situated within settled areas and along riverbanks, characterised by small valley forests and bushes. In the broader study area, vegetation includes small patches of grassland, many of which have been degraded due to settlement and subsistence farming activities [23].
The site is traversed by the Durban Metropolitan Open Space System (D'MOSS), and parts of the site are classified as Critical Biodiversity Areas [23]. D'MOSS is a formal municipal planning policy instrument that identifies a series of interconnected open spaces that incorporate areas of high biodiversity value and natural areas [20], with the purpose of protecting the globally significant biodiversity (located within the Maputo-Pondoland Biodiversity Hotspot) and ecosystem services within the city [24,25].
---
Case Study: Wise Wayz Water Care Programme
The Wise Wayz Water Care (WWWC) programme commenced in 2016 and brought together community members from Folweni and Ezimbokodweni (the "beneficiaries"), who were previously working as separate volunteer groups, mainly performing litter removal along the Mbokodweni and Golokodo river systems. Under WWWC, the beneficiaries work and learn together, aiming to improve the socio-economic and environmental conditions of their communities through the implementation of various environmental management interventions. This work was stimulated by flooding that damaged houses in the lower-lying areas during a heavy rainfall event in 2016; the flooding was exacerbated by solid waste and invasive alien vegetation that blocked flows and channels in the river systems, causing localised flooding. The beneficiaries (N = 130) include males (N = 41) and females (N = 87), with levels of education ranging from Grade 1 (lowest level of primary education) to Grade 12 (highest level of secondary education), with 1 person having tertiary education.
The WWWC programme is managed by a non-profit organisation, i4WATER, through funding provided by a business operating in the Mbokodweni Catchment, and located in the Umbogintwini Industrial Complex (Figure 1), the African Explosives and Chemical Industry (AECI) Community Education and Development Trust, since 2016. The objectives of the WWWC programme include improving the environmental health of the lower Mbokodweni Catchment (the study area) and supporting sustainable livelihoods of beneficiaries as well as the greater community through training and skills development, alongside small enterprise development. Beneficiary training included invasive alien plant (IAP) identification, removal, and control; poultry and vegetable production (fertilisation, disease, and pest control; irrigation, harvesting, and marketing); environmental and aquatic management and monitoring (e.g., use of water-related citizen science tools, i.e., miniSASS, clarity tube, Escherichia coli (E. coli) swab); health and safety training; and community education and engagement.
The beneficiaries of the WWWC programme implemented six environmental management interventions within natural areas in and around Ezimbokodweni and Folweni, namely, (1) Solid waste management and removal: removal of waste from aquatic and terrestrial areas; (2) Recycling: waste collection and storage for recycling; (3) Invasive alien plant control: identification and control of invasive alien plants along rivers and streams; (4) Water quality monitoring: monthly biophysical monitoring of river water quality; (5) Community vegetable gardens: vegetable production (two gardens) using permaculture methods; (6) Community engagement: door-to-door community engagement, surveys, and knowledge sharing. Interventions were identified by beneficiaries in response to related challenges faced in the community, and were implemented with support from business funding, within the lower Mbokodweni catchment, at 20 sites within Folweni (11) and Ezimbokodweni (9), along various rivers, tributaries, wetlands, and open areas (Figure 1).
Interventions considered in this study were undertaken over a 3-year period from 2016 to 2018. The removal of solid waste from the rivers took place 4 days per week by 45 team members, who collected an average of 1.1 tons of solid waste per month. The recycling team collected and separated the recyclable waste from the collected solid waste, which amounted to approximately 0.48 tons of recyclable waste per month. The community engagement and education team, of 44 members, visited homes in their areas 3 times per week to discuss the various socio-economic and environmental issues that the community is facing. The team also provided information and education to the homes they visited on how to address some of the challenges. The invasive alien plant clearing teams worked along 6.8 km of rivers, as well as in wetlands, to remove invasive alien plants, clearing 40 ha using mechanical methods. Up to 28 species categorised as invasive in South Africa were cleared, primarily Diplocyclos palmatus, Canna indica, Arundo donax, Lantana camara, Melia azedarach, Tithonia diversifolia, and Ricinus communis. The aquatic monitoring team conducted assessments at 22 sites on a monthly basis, analysed and interpreted the data collected, and used the findings to address the challenges undermining river health. In the 2 community vegetable gardens, 28 team members worked daily to plant a variety of vegetables and herbs, including spinach, tomatoes, carrots, cabbage, kale, beetroot, and lettuce.
---
Identifying Values and Perceptions of the WWWC Programme
---
Focus Group Meetings, Workshops, and Interviews
In order to obtain more details on the operational aspects of the interventions, and to ascertain personal perceptions on the programme, we conducted focus group meetings with the WWWC implementers, i4Water, and 1 AECI representative, which involved open discussions of the WWWC programme. We also hosted 2 workshops with 20 and 60 WWWC beneficiaries. During the first workshop, beneficiaries were asked to participate in various individual and group activities in order to (1) identify the positive and negative events or aspects of the WWWC project; (2) identify strengths, weaknesses, opportunities, and threats related to the WWWC programme; and (3) note any changes in the community and biophysical environment that occurred due to the WWWC programme. Personal interviews were held with 9 beneficiaries and 1 coordinator from the programme funding institution in order to obtain greater insight into the WWWC programme, personal experiences, and the manner in which the programme had changed individuals' lives, including contributions to their livelihoods, sense of place, and health.
---
Surveys
We conducted surveys (N = 3) with beneficiary, community, and external stakeholders (including the WWWC funders, AECI, government stakeholders (eThekwini Municipality), and the South African National Biodiversity Institute (SANBI)) (Data S1), in order to identify individual understanding and perceptions of the WWWC programme and associated benefits to the community and beneficiaries, as well as the environment and ES use, and also to gather data on the social, ecological, and economic attributes of the study area [26]. These surveys also collected socio-economic and health data of participants. Open-ended questions were designed to extract perceptions of the value of the programme to the social-ecological system of the study area. The three surveys were (1) a beneficiaries survey, (2) a community survey, and (3) a key stakeholder online survey. Beneficiary surveys were conducted in a workshop setting (N = 60), community surveys were conducted at random households along the Mbokodweni and Golokodo rivers (N = 60), and key stakeholder online surveys were conducted via Survey Monkey (N = 6). The beneficiary and community questionnaires were translated into isiZulu, and participants were allowed to choose the language of their preference to complete the questionnaires. Informed consent to utilise the outcomes of the study for research purposes was obtained from all participants, as required by the ethical approval. Data collected via the surveys were analysed using Statistical Package for Social Sciences (SPSS) 25. This study is limited in that surveys were only conducted after interventions were implemented.
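As a hedged illustration of the kind of descriptive tabulation described above (the study itself used SPSS 25), the following Python sketch cross-tabulates perceived-change responses by respondent group; the data frame, column names, and values are invented for illustration only and are not the study's data.

```python
# Minimal sketch (not the study's actual analysis or variable names):
# tabulating categorical survey responses by respondent group with pandas.
import pandas as pd

responses = pd.DataFrame({
    "group": ["beneficiary", "community", "beneficiary", "community"],
    "area_cleaner": ["yes", "yes", "no", "yes"],          # perceived change
    "language": ["isiZulu", "English", "isiZulu", "isiZulu"],
})

# Percentage of each group reporting that the area is cleaner
freq = pd.crosstab(responses["group"], responses["area_cleaner"],
                   normalize="index") * 100
print(freq.round(1))
```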
---
Site Visits
The authors conducted site visits to Folweni, Ezimbokodweni, and selected WWWC work sites to identify the general living conditions of the community in the study areas (housing, water supply, waste management, etc.), and the biophysical condition of the areas where the WWWC interventions were implemented (wetlands and rivers, open spaces, etc.). Direct field observations were made, and photographs were taken for record purposes. We held on-site discussions with i4WATER and beneficiaries from each of the intervention teams. These visits were done to gain a deeper contextual understanding and gather firsthand data on the interventions and their impacts on site.
---
Social-Ecological System Workshops with Beneficiaries
In order to better understand the social-ecological system of the study area, we hosted the second workshop with WWWC beneficiaries (N = 60), who were randomly selected from the list of beneficiaries. We used A0 size maps as the focus of discussions, which showed the locations of WWWC work areas (WWWC programme boundary and locations of management intervention sites, e.g., water quality monitoring points, and solid waste removal sites). Maps were drawn using ArcGIS 10.4, showing the WWWC work sites relative to other landscape attributes and ecological habitats, namely, the D'MOSS, including wetlands, rivers, and vegetation habitats. Beneficiaries reflected on the maps and related their experiences in the study area. Key questions that were explored in the workshop related to existing or perceived understandings of (1) opportunities related to social activity, knowledge sharing, and natural resource use (e.g., water extraction, livestock grazing, and watering); (2) potential expansion of WWWC work areas; and (3) threats relating to health and safety, such as sources of pollution and illegal dumping of solid waste.
---
Identifying Ecosystem Services Used and Valued
Ecosystem services were identified from survey responses on the basis of the existing use or demand for that service. Surveys (as described above) were used to collect data on ecosystem service usage by (access), and values of, beneficiaries and community members. The ecosystem services included in the survey were (1) River water use: use of natural water from river or stream (e.g., for washing clothes or cars, or for general household use);
(2) Natural material harvesting: gathering natural materials for various uses, e.g., medicinal plants or wood; (3) Subsistence use: direct use of natural resources to sustain life, e.g., food or water; (4) Agricultural use: crop or livestock production; (5) Cultural practices: use of natural areas for cultural practices or rituals; and (6) Recreation and leisure: use of natural areas for leisure or outdoor activities.
---
Results
---
Perceived Ecological, Health, Safety, and Socio-Economic Benefits from Civic Ecology Interventions
Both the beneficiaries (from the survey and workshops) and the broader community (from household surveys) reported positive changes in the community after the civic ecology interventions had been implemented (Figure 2). These included direct observations that the area and stream were cleaner, as well as indirect benefits such as improved education and reduced danger. Beneficiaries also identified the benefit of improved health, including a noticed decrease in the number of mosquitos in the area due to the improvement in river water flow. The benefit most frequently noted by community participants and beneficiaries was that the area was cleaner after solid waste pollution had been cleared from the land and rivers. This work, coupled with beneficiaries' knowledge sharing on the dangers of littering and poor waste management, has resulted in a reduction of dumping by residents. This cleanliness can be linked to a decrease in the risk of diseases associated with pollution and a reduction in the risk of injury to humans and animals (e.g., reports that skin rashes no longer occurred after children played in the river, and a reduction in mosquitos), which are considered to be positive health outcomes [27].
Among community respondents who reported consuming vegetables, more than half of the vegetables consumed were purchased from the WWWC, which shows that the programme provided a significant source of vegetables to the community. This has a positive impact on nutrition by facilitating improved access to a wider variety of fruit and vegetables, resulting in a more balanced diet, with positive effects on health and well-being [28]. WWWC vegetable irrigation was sourced solely from river water.
The community held knowledge of the different programmes being undertaken by the WWWC. Most of the community respondents heard about or interacted with the community engagement (88.2%), invasive alien plant (IAP) control (64.7%), solid waste removal and management (58.8%), vegetable gardening (54.9%), recycling (49%), and river water quality monitoring (23.5%) teams. All respondents who noted the area being cleaner also had knowledge of all the WWWC programmes, showing that community members could relate the work being done by beneficiaries to the positive changes taking place in their community. Comments made in the survey indicated that beneficiaries were appreciated by the community for the knowledge that they shared with respect to environmental education and management.
Half of the external stakeholders, and over 40% of beneficiaries noted that the stream was cleaner after the programme was operational (Figure 2). Over 80% of stakeholders and one-third of beneficiaries noted that there was a decrease in invasive alien plants since the interventions were implemented. This was also visible from site observations (see Figure S1).
Of the nine benefits beneficiaries experienced from working as part of the WWWC (survey) (Figure 3), more than 60% of beneficiaries experienced six or more benefits, with 96% of beneficiaries listing education on the environment as a benefit, followed by new business opportunities (76%), and increased water security (72%). The first formalised community-based small business was developed by some of the beneficiaries, Envirocare Management Systems (Pty) Ltd., providing prospects for income through invasive alien plant control and water quality monitoring services. External stakeholders similarly perceived the benefits to beneficiaries as high, with 83% noting increased education, 92% noting increased business opportunities, and 83% recognising personal development as benefits to beneficiaries (Figure 3). From the nine personal interviews that were conducted with WWWC beneficiaries, it was apparent that the WWWC programme had a positive impact on all nine individuals in terms of personal development through education and training, feelings of self-improvement, and increased hope for the future (see Data S2a,b). WWWC also experienced some challenges related to cost recovery, entry requirements for training courses, and illegal dumping (see Data S2c).
An aspect of success that served to encourage sustainable participation in civic ecology initiatives was the increased knowledge, education, and training, which resulted in new skills that benefitted beneficiaries and the broader community, e.g., transitioning from subsistence farmer to small scale producer and undergoing first aid training (Data S2a). Such spin-off benefits to the broader community have strengthened social cohesion.
---
Nature and Ecosystem Services Enhanced by Civic Ecology Interventions
The natural areas that were enhanced by the interventions included terrestrial and aquatic habitats, e.g., wetlands, rivers/streams, riparian vegetation, and open space (natural areas zoned as public open space). The interventions made positive impacts on ecological areas and were thus considered to have the potential to enhance ecosystem services. The habitats improved by the interventions are linked to the enhancement of numerous ecosystem services, including regulating services or Nature's Contributions to People (NCP) of water purification, flood mitigation, biological regulation and/or disease control, as well as maintenance of biological diversity (genepool protection) (previously considered a supporting service [2], but now captured in regulating NCP [4]); cultural or non-material NCP of aesthetic, recreational, cultural, and education services; and provisioning services or material NCP of water supply, food, and harvesting products [4,29]. People accessed ecosystem services for water, agricultural production, and harvesting of medicinal plants and wood (see Table S1), and increased their use of natural spaces for cultural and spiritual activities once these had been cleaned by the beneficiaries, for example, using the wetland in Ezimbokodweni for cultural rituals (Umemelo, the Zulu traditional coming-of-age ceremony for women) (see Figure S1).
---
Ecosystem Services Uses and Values
Ecosystem services were widely used and valued by the broader community (randomly selected residents) and beneficiaries (Figure 4). The ecosystem services used most were agricultural use (crop and livestock production), followed by subsistence use (use of natural resources to sustain life) and cultural uses. Beneficiaries valued subsistence ecosystem services the most, followed by aesthetic value and cultural value, while broader community members valued aesthetic, economic, and cultural services the most (Figure 4). The value categories used in the survey were defined as follows: aesthetic value (enjoying the scenery and beauty of nature); economic value (benefiting from nature through the sale of products, e.g., traditional medicine, vegetables, wood); recreational value (using natural spaces for leisure and outdoor activities); life-sustaining value (nature produces goods, and renews air, water, and soil); spiritual value (natural spaces valued as sacred for religious practices); cultural value (natural spaces important for cultural practices and rituals and as places for transferring cultural knowledge through generations); and subsistence value (nature provides goods to sustain life, e.g., food and water).
River water was used most for the irrigation of subsistence crops, followed by livestock and personal use (see Figure S2). Participants also used river water for recreation, which was reported to have increased due to the improvement in the cleanliness of the area and the water since WWWC had been operating. People reported using the "now clean" river water for washing clothes and cars, as well as for flushing toilets. Business use of river water (by beneficiaries and community members) was for car washing, brick making, livestock, and sales from crop production. For each category, a greater proportion of beneficiaries than broader community members used river water. During the workshop, locations of access to ES were reported, including wood and medicinal plant harvesting collection points in adjacent forests, recreational areas, and religious gathering sites. Threats and opportunities related to WWWC operation were also identified (see Table S1). In terms of frequency of river water use, 28.5% of community members and 40.7% of beneficiaries used river water daily, 35.7% and 0% weekly (no beneficiaries reported weekly use), 21.4% and 3.7% monthly, and 14.2% and 48.1% seasonally.
---
Discussion
---
Civic Ecology Contributes to Social-Ecological System Benefits and Ecosystem Service Protection and Enhancement
The high use of ecosystem services highlights the importance of natural capital for the livelihoods of people in the community. Similar to other studies, ecosystem services were widely used and valued by the community, and even more so by the beneficiaries, as a means to enhance well-being through mitigating poverty and diversifying household livelihoods, enhancing food security and access to nutritious food, enhancing health, improving personal safety and security, accessing clean water and air, and promoting social cohesion [2,30,31]. As found in similar studies, civic ecology practices were initiated in response to a natural disaster (the flood in 2016) [32]. In so doing, the beneficiaries were able to mitigate ecosystem disservices through environmental management and enhancement of ecosystem services. This led to positive outcomes for both the beneficiaries and their communities [33].
This study confirms that civic ecology practices contribute to the provision of a variety of ecosystem services, including cultural services such as education and learning, social relations, and recreation [7]. We confirmed links between spiritual values and resource management [34], whereby management, environmental protection, and stewardship increase when people associate spiritual and cultural value with natural areas [35].
The social-ecological interactions in the community influence the manner in which people value the environment, whereby valuation of biodiversity is determined by the practical function obtained from the ecosystems and ecosystem services that enhance the livelihoods of individuals [36]. The perceptions of values identified in this study assert that there is strong dependence of people on ecosystem services, and their understanding of this dependence has, in turn, motivated them towards voluntary environmental stewardship.
We confirm that civic ecology practices both sustain human health [37] and lead to the creation of new natural capital [38]. Our study supports the understanding that local communities can benefit from projects that aim to integrate sustainable development and environmental management, and can create positive attitudes and perceptions towards conservation initiatives [39]. Such projects should aim to incorporate the environmental, social, and economic dimensions, including sustainable use of ecosystem goods and services, promoting dignified standards of life, and providing employment opportunities [39].
The results have governance implications. The interventions were able to address some of the impacts on Durban's rivers [20] and enhance terrestrial habitats within Critical Biodiversity Areas that are crucial to meet biodiversity targets [40], thereby reducing the pressure on government authorities who are mandated to manage these areas for conservation purposes. The outcomes of this study related to ecosystem service uses by disadvantaged communities can also be considered by authorities in preparing conservation plans, where such understanding may assist in determining the capacity of ecosystems to support both social and ecological communities [26]. This study highlights that local communities can leverage natural capital for well-being and social-ecological improvements and encourages policy support of civic ecology initiatives.
---
Civic Ecology Provides Opportunities for Social Cohesion and Personal Development
We show that social cohesion is critical for the achievement of sustainability and well-being [2], and that ecosystem services provide a basis for spiritual, cultural, and social cohesion experiences [4]. Such perceptions, when coupled with scientific evidence of positive outcomes of management interventions, provide a powerful combination for ensuring the sustainability of civic ecology programmes.
Positive perceptions of community members of the impacts of environmental management can ensure both support for, and long-term sustainability of, management initiatives [41]. The perceptions of the direct relationships between the positive social-ecological changes taking place in the area and the work being done by the beneficiaries has strengthened social cohesion in the community.
The involvement of the community in the selection and implementation of the interventions strengthened the sustainability of the interventions. Our study provides evidence that, contrary to the notion of the tragedy of the commons [42], by taking ownership and control of natural capital, local communities can successfully contribute to improved collective human well-being.
---
Conclusions
Our study showed that increased knowledge of ecosystems through civic ecology practices contributed to the protection and increased use and benefit of ecosystem services, both for beneficiaries and for other members of disadvantaged communities. Civic ecology practices have the potential to uplift impoverished communities by providing opportunities for education, as well as enhanced ecosystem service protection and access, and should, therefore, be encouraged and supported by government and policy. Given that civic ecology groups are increasingly recognised by governments for their contribution to natural capital, they need to be supported by government and the private sector through policies aimed at achieving sustainability and well-being [43].
This study provides evidence of the potential for civic ecology initiatives, supported by private practice, to overcome the tragedy of the commons and enhance ecosystem services for low-income communities who depend directly on ecosystem services for their livelihoods and well-being. We call for increased governance support of similar civic ecology initiatives as a means to capacitate local communities to take ownership of natural capital and make gains in the fight against poverty and environmental degradation.
---
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
---
Supplementary Materials:
The following are available online at https://www.mdpi.com/2071-1050/13/3/1300/s1: Figure S1: Ezimbokodweni Wetland 2015 (before WWWC) and 2018 (after WWWC). Figure S2: Natural water used by beneficiaries and community members. Table S1: Social-ecological system workshop findings. Data S1: Questionnaires/surveys. Data S2a: Stories of change; Data S2b: Comments made by beneficiaries, community members, and external stakeholders; Data S2c: WWWC challenges.
---
Funding:
This research is part of SHEFS, an interdisciplinary research partnership forming part of the Wellcome Trust's funded Our Planet, Our Health programme, with the overall objective to provide novel evidence to define future food systems policies to deliver nutritious and healthy foods in an environmentally sustainable and socially equitable manner. This research was funded by the Wellcome Trust through the Sustainable and Healthy Food Systems (SHEFS) Project (grant no. 205200/Z/16/Z). The South African Research Chairs Initiative of the Department of Science and Technology and the National Research Foundation of South Africa (grant no. 84157) financially supported the research. The funding support of i4Water is also acknowledged for commissioning Rashieda Davids and Margaret Burger to undertake an associated study that facilitated data collection for this study.
---
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; or in the decision to publish the results. | 36,381 | 1,555 |
52e2c86ebe51f349d2cd2b4f9735a7944dfbd323 | Factors associated with seasonal influenza and HPV vaccination uptake among different ethnic groups in Arab and Jewish society in Israel | 2,021 | [
"JournalArticle"
] | Background: Parents in the Arab population of Israel are known to be "pro-vaccination" and vaccinate their children at higher rates than the Jewish population, specifically against human papilloma virus (HPV) and seasonal influenza. Objectives: This study seeks to identify and compare variables associated with mothers' uptake of two vaccinations, influenza and HPV, among different subgroups in Arab and Jewish society in Israel. Methods: A cross-sectional study of the entire spectrum of the Israeli population was conducted using a stratified sample of Jewish mothers (n = 159) and Arab mothers (n = 534) from different subgroups: Muslim, Christian, Druse and Northern Bedouins. From March 30, 2019 through October 20, 2019, questionnaires were distributed manually to eighth grade pupils (13-14 years old) who had younger siblings in second (7-8 years old) or third (8-9 years old) grades. Results: Arab mothers exhibited a higher rate of uptake for both vaccinations (p < .0001, HPV -90%; influenza -62%) than Jewish mothers (p = 0.0014, HPV -46%; influenza -34%). Furthermore, results showed that HPV vaccination uptake is significantly higher than seasonal influenza vaccination uptake in both populations. Examination of the different ethnic subgroups revealed differences in vaccination uptake. For both vaccinations, the Northern Bedouins exhibited the highest uptake rate of all the Arab subgroups (74%), followed by the Druse (74%) and Muslim groups (60%). The Christian Arab group exhibited the lowest uptake rate (46%). Moreover, the uptake rate among secular Jewish mothers was lower than in any of the Arab groups (38%), though higher than among religious/traditional Jewish mothers, who exhibited the lowest uptake rate (26%). A comparison of the variables associated with mothers' vaccination uptake revealed differences between the ethnic subgroups. Moreover, the findings of the multiple logistic regression revealed the following to be the most significant factors in Arab mothers' intake of both vaccinations: school-located vaccination and mothers' perceived risk and perceived trust in the system and in the family physician. These variables are manifested differently in the different ethnic groups. | Background
The research literature identifies different types of decision-makers in the context of vaccinations: pro-vaccination, hesitant (selective choice of when and for what to vaccinate) and anti-vaccination. Each type is marked by its own considerations and decision-making processes. Most studies point to lower levels of vaccination among minority population groups than among dominant groups [1][2][3][4].
Arabs living in Western countries as minority groups tend to vaccinate their children less than the dominant national group [5][6][7][8][9]. Nevertheless, a few studies show a higher vaccination rate among the children of Arab minorities living in Western countries [10].
The low vaccination uptake rate among minority groups in high income countries stems from various reasons. The main ones include lack of trust [11][12][13]; hostility toward the government [14]; medical staff language barriers and inability to understand patients' values, norms, language and behavior [6,15,16]; opposition to institutional recommendations [17]; inability to integrate into the life of the dominant society [18,19]; limited knowledge about vaccinations [16,20]; other social and economic factors, such as high income [21][22][23] and low educational level [6,24,25], both of which increase the chances of vaccination uptake; traditional beliefs [26]; and sex of child in the case of the HPV vaccine [27,28].
As opposed to the aforementioned Arab minority groups living in Western countries, parents in the Arab population of Israel are known to be "pro-vaccination" and tend to vaccinate their children at higher rates than the Jewish population, specifically against the human papillomavirus (HPV) and seasonal influenza [29].
These two vaccinations were recently introduced into the Israeli schools. The influenza vaccine is given at school both to boost vaccination uptake rates and because influenza is a common infectious disease among children. Moreover, the HPV vaccine can be targeted at school before children become sexually active. In 2013, the HPV vaccine was included as part of the planned routine vaccines given in school to girls in the eighth grade, and was later extended to include all eighth graders (13-14 years old), including boys [30]. According to the Israel Ministry of Health [30], in 2016 the uptake rate for the HPV vaccine in Arab schools reached 84% (96% among the Northern Bedouins), compared to 40% among the Jewish population.
Similarly, in 2016 second grade pupils (7-8 years old) in Israel began receiving the live attenuated seasonal influenza vaccination at school. In 2017, third graders (8-9 years old) were also included in the school-located influenza vaccination program, with some children receiving the first dose of the vaccine and some receiving the second. Beginning in September 2018, fourth graders were also included in the school-located vaccination program, such that during the 2019-2020 influenza season, all pupils in the second to fourth grades were offered one dose of the seasonal influenza vaccine at school [31].
After the seasonal influenza vaccine was introduced to the school-located vaccination program in the 2016-2017 influenza season, the uptake rate for second graders in the Arab schools was 84%, compared to 47% among the Jewish population. The Ministry of Health's vaccination report for 2019 points to higher vaccination coverage in the Arab schools (81.4%) than in the schools in the Jewish sector (44-54%). The primary reason for not vaccinating children was parental refusal (94%). Children who required a second dose and never received influenza vaccinations in the past were instructed to complete the vaccination at their HMO (Health Maintenance Organization) [32]. It should be noted that uptake of these two vaccinations among the Jewish population is much lower than uptake of other routine school vaccinations, such as MMRV (96%) and Dtap IPV (95%) [33].
Alongside a large body of evidence indicating the effectiveness and safety of the HPV vaccine [34][35][36], the research literature also reveals a scientific controversy surrounding the safety of this vaccine. Several smaller studies examining the HPV vaccine reported side effects, some relatively minor, such as pain at the injection site, fainting and dizziness, and some more serious, such as POTS (postural orthostatic tachycardia syndrome), neurological disturbances (CRPS, complex regional pain syndrome), leg paralysis, autoimmune diseases and sympathetic nervous system deficiencies [37][38][39]. Barriers to the HPV vaccination are related to taboos in conservative societies prohibiting sexual relations before marriage [40][41][42][43]. These fears are common among the Arab population as a whole, and particularly among the Muslim population, as well as among Orthodox Jews [44][45][46][47].
Moreover, despite studies pointing to the effectiveness of the seasonal influenza vaccine [48][49][50], some studies report a controversy surrounding its effectiveness [43][44][45][46]. Studies have pointed to varying effectiveness according to age group: 54% (age 6-17), 61% (under the age of 5), 70% (6 months-8 years), 73% (2-5 years), 78% (6 months-7 years). Regarding influenza vaccine efficacy, research has similarly shown variation by age group: 28% (age 2-5), 59% (6 months-15 years), 60% to 83% (6 months-7 years), 61% (under the age of 5 and age 6-17) and 69.6% (5-17 years) [5,51,52].
In 2020 the Arab population of the State of Israel numbered about two million people, constituting 25% of the general population. Of these, 82% are Muslim, 9% are Christian and 9% are Druse. Fifty-three percent of Arab families live in poverty, compared to 14% of Jewish families. Over the years, the educational level of the Arab population has improved, yet the educational gaps between Arabs and Jews remain large. Of Arab women between the ages of 25 and 34, 29% completed 16 or more years of education, compared to 50% of Jewish women in the same age group [53].
Jewish society is divided into several groups: secular (45%), traditional (35%), religious and very religious (16%) and ultra-Orthodox (14%). In this study we examined the traditional group, which is located on a spectrum somewhere between religious and secular [54]. For the most part, traditional Jews observe specific commandments and traditions considered to be clear signs of traditional belief. They do so not necessarily out of strict compliance with Jewish law but rather out of a sense of identification and belonging with the Jewish people or out of a belief that these traditional values must be safeguarded to guarantee the existence of the Jewish people [54].
The Israeli school system is marked by a great deal of segregation. Arabs and Jews do not attend the same schools. Moreover, the very religious and ultra-Orthodox groups attend different schools from the secular and traditional groups and sometimes from each other [55], leading to inequality in education, research and policy. Jews and Arabs also tend to live in different residential areas under separate municipal authorities, pointing to spatial politics and discrepancies between Jews and Palestinians within Israel [56].
In view of the interesting phenomenon of high vaccination rates among the Arab population of Israel, this study focuses on the factors related to decision-making among Arab mothers in Israel regarding these two vaccinations: seasonal influenza and HPV. These two vaccinations were chosen for two reasons: 1) They were recently introduced to the school-located vaccination program. 2) Both are a matter of controversy-regarding safety in the case of the HPV vaccination and regarding effectiveness in the case of the influenza vaccination.
In addition, very few research studies have examined vaccination uptake rates among the various subpopulations in Arab society, with most research tending to consider the Arab population as a single entity. This study seeks to examine these two issues. It investigates the variables influencing vaccination uptake among subgroups in the Arab population (Muslims, Christians, Druse and Northern Bedouins), while comparing vaccination uptake to that of the national Jewish population (secular and religious groups).
The overarching goal of this study is to rank the extent of uptake of these two vaccinations-seasonal influenza and HPV-among the subgroups in Arab society and in Jewish society, from the highest uptake rate to the lowest.
The specific research objectives are as follows:
To compare vaccination uptake of the HPV and seasonal influenza vaccines in the Arab population to that in the Jewish population.
To identify and characterize the variables associated with mothers' uptake of these two vaccinations.
To compare vaccination uptake of the HPV and seasonal influenza vaccines in the different ethnic subgroups.
To compare the differences between ethnic subgroups for each variable associated with mothers' vaccination uptake.
---
Methods
---
Research population
The research population included mothers with children in both of the following two age groups:
1. A child in second or third grade, such that the mothers must decide whether their child should get the seasonal influenza vaccination that was recently introduced to the school-located vaccination program. 2. A child in the eighth grade, such that the mothers must decide whether their child should get the HPV vaccine, also part of the school-located vaccination program.
Mothers who had children both in elementary school and in middle school were included, while mothers who had children in only one of these two age groups were excluded from the study. We chose mothers with two children of different ages in order to compare the mothers' decision-making with respect to the two different vaccinations. Our rationale in choosing mothers was that commonly mothers are the primary parent in the family when it comes to making decisions about vaccinations [29].
---
Sampling method and research procedure
The sample was chosen by means of stratified sampling [57] according to the ethnic subgroups examined. The sampled subgroups were of equal size rather than in accordance with their relative proportion in the population of Israel. Hence, each group had the same number of participants, facilitating group comparisons. After the study was approved by the Ethics Committee of the Faculty of Social Welfare and Health Sciences at the University of Haifa (Approval No. 118/16), participants were recruited by means of stratified heterogeneous sampling [58] at schools in a number of different localities in Israel. During the period March 30, 2019 through October 20, 2019, questionnaires were distributed manually to eighth-grade pupils who had younger siblings in second or third grade. The children who met the study's inclusion criteria were given a letter asking their parents to participate in the study and providing the researchers' contact details. Parents who indicated their willingness to participate (gave their informed consent) received a questionnaire, which they returned to the school a few days later. The response rate was 92%. The sampling method was manual rather than via an online questionnaire because a substantial portion of the Arab population, and particularly the Northern Bedouin population, has low digital literacy [59].
---
Research tools
Prior to the quantitative study described in this paper, we conducted preliminary qualitative research using personal interviews with mothers of children at the targeted ages. The interviews focused on decision-making with respect to vaccinations [29]. Based on the results of this preliminary qualitative research and on validated questionnaires from the research literature focusing on different variables relevant to our research objectives [43,60], we constructed a questionnaire (see Additional file 1) that was culturally adapted to the different subgroups in our study. After constructing the questionnaire, we calculated the Cronbach's alpha value for items that appeared to be associated with measures of theoretical significance in order to validate each measure. Cronbach's alpha is used to provide a measure of the internal consistency of a test or scale and is expressed as a number between 0 and 1. Internal consistency describes the extent to which all the test items measure the same concept or construct and hence reflects the inter-relatedness of the items within the test [61]. The questionnaire included socio-demographic data such as respondent's age, number, age and sex of children, education, income, residential area, level of religiosity and ethnicity. It also included questions about vaccination uptake based on the mothers' self-reports regarding the two relevant vaccinations recommended by the Ministry of Health: seasonal influenza and HPV. The statements in the first part of the questionnaire referred to variables related to vaccinations in general (called "general variables"). These included attitudes toward vaccinations (e.g., "All vaccinations recommended by the health authorities are safe"); trust in doctors (e.g., "When it comes to vaccinations, I trust my family doctor because he is the expert and knows more than I do"); trust in the system (e.g., "I trust the health system in Israel because of its high quality of care and service"); and low health literacy, referring to the extent to which the mothers think they are capable of seeking and reading information about vaccinations (e.g., "I don't have time to look for information about vaccinations so I make do with what the medical team (nurse and doctor) tells me").
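For readers unfamiliar with the internal-consistency measure referred to above, the sketch below shows one common way to compute Cronbach's alpha from a respondents-by-items score matrix; the scores are invented for illustration and are not the study's data.

```python
# Illustrative Cronbach's alpha computation (invented Likert scores).
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scores for one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

scale = np.array([[4, 5, 4],    # 4 respondents x 3 items, 5-point Likert
                  [2, 3, 2],
                  [5, 5, 4],
                  [3, 3, 3]])
print(round(cronbach_alpha(scale), 2))   # value between 0 and 1
```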
The statements in the second part of the questionnaire focused on variables associated with each vaccination separately (called "specific variables"). For example, with respect to perceived risk, the questionnaire included statements about perceived risk of each disease and perceived risk of each vaccination (influenza and HPV, respectively). It also included statements related to perceptions regarding the inclusion of these vaccinations in the school-located vaccination program as a legitimizing factor for giving children these two vaccinations. Respondents were instructed to respond to each statement on a five-point Likert scale. The statements were grouped and defined as independent variables according to subject area (attitudes, trust, low health literacy and inclusion in the school vaccination program).
An examination of the correlations between all the independent variables yielded correlation coefficients less than 0.5. Therefore, we ran a multiple regression model. We also examined the associations between these variables and the dependent variable (uptake of the two types of vaccination: seasonal influenza and HPV) (see Table 1).
---
Reliability and validity
During questionnaire construction, the questions were formulated in Hebrew and translated into Arabic. They were then translated into Arabic a second time by a second translator to examine their cultural appropriateness and wording. After that, we conducted a pilot study among a sample of 80 participants to validate the content and check the wording to make sure it was culturally appropriate for the target population. After data collection and entry, quality control was applied to discover any errors in data entry. The quality control entailed examining the range of data for each question and generating distributions. In addition, the variables were examined for outliers [62] and tested to determine whether they met the assumption of normality.
---
Data analysis
To compare vaccination uptake between the Jewish and Arab populations, we calculated the uptake rates for the two groups for the two vaccines. We used McNemar's test to examine the significance of the differences between the uptake of the two vaccines in each of the subgroups. To identify the variables associated with mothers' uptake of the two vaccines, we first conducted separate multiple logistic regressions according to type of vaccination, with uptake of the specific vaccine-HPV or influenza-as the dependent variable. Examination of the correlations between all the independent variables yielded coefficients that were all less than 0.5. Therefore, we were able to run a multiple regression model assuming no multicollinearity. We ran the multiple regression in two stages: In the first stage we ran the general variables and the specific variables in the multiple regression model to test the effect of each variable. In the second stage, we removed the variables that were not significant and ran the multiple regression again with the significant variables only to examine the exact effect of the variables on vaccine uptake. To examine the differences between the various subgroups with respect to variables associated with mothers' uptake, first we used descriptive statistics and calculated the means of the variables among the different ethnic groups. Second, we conducted posthoc testing for all the dependent variables: attitudes, trust in the system, trust in the doctor, low health literacy, school-located vaccination program, and risk perception of both vaccines. We then conducted a multiple comparison analysis using the Tukey correction to examine the significant differences between the various ethnic groups.
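A minimal Python sketch of the first analysis steps described above (McNemar's paired test followed by a multiple logistic regression) is given below; the study used its own software and variable names, so the file name and column names here are assumptions for illustration only.

```python
# Hedged sketch of the analysis steps, not the study's actual code.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.contingency_tables import mcnemar

df = pd.read_csv("mothers_survey.csv")   # hypothetical file; uptake coded 0/1

# 1) McNemar's test on paired uptake of the two vaccines per mother
paired = pd.crosstab(df["hpv_uptake"], df["flu_uptake"])
print(mcnemar(paired, exact=False, correction=True))

# 2) Full multiple logistic regression (general + specific variables);
#    a reduced model would then be refitted with significant predictors only
full = smf.logit(
    "flu_uptake ~ C(ethnicity) + attitudes + trust_system + trust_doctor"
    " + school_program + low_literacy + risk_vaccine + risk_disease",
    data=df,
).fit()
print(full.summary())
```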
---
Results
---
Sample description
A total of 693 mothers participated in the study. The participants included mothers from almost the entire spectrum of the Israeli population. The Arab population was defined as the primary research population, while the national Jewish population (secular and religious/ traditional groups) served for comparison purposes. Note that the ultra-Orthodox population was not included in the study. Table 2 shows their sociodemographic characteristics, followed by the mothers' education by ethnic groups and monthly income by ethnic groups (Tables 3 and4
---
respectively).
---
Differences in uptake between Arab and Jewish populations
The research findings reveal differences in uptake of the two vaccinations between the Arab and Jewish populations, such that Arab mothers have a higher uptake rate for both vaccinations (HPV -90%; influenza -62%) than Jewish mothers (HPV -46%; influenza -34%) (Fig. 1).
The differences shown above are statistically analyzed in subsequent sections. Note that due to differences between the two vaccinations, we analyzed each of them separately. In addition, we found that in each case different factors influence vaccination uptake. Therefore, to examine the variables associated with mothers' uptake of the two vaccinations, we computed two multiple logistic regression models and entered ethnicity as an independent variable in each. The models examined both general and specific variables associated with vaccine uptake.
Furthermore, McNemar's test results reveal significant differences in uptake according to type of vaccination, showing that uptake of the HPV vaccination is significantly higher than uptake of the seasonal influenza vaccination in both populations: Arab (p < .0001) and Jewish (p = 0.0014).
---
Variables specifically associated with mothers' uptake of seasonal influenza vaccination
The first model for seasonal influenza vaccination included the general variables of ethnicity, attitudes, trust in the system, trust in the family doctor, school-located vaccination program and health literacy, and the specific variables of vaccine risk perception and disease risk perception. In this model, the general variables of attitudes (p = 0.3286) and trust in the family physician (p = 0.2715) were not significant. Therefore, to examine the precise effect of each variable on influenza vaccination uptake, we eliminated these two variables and ran the multiple regression with the significant variables only. Trust in the medical system was significant in the first model (p = 0.0199), but was no longer significant when entered into the reduced model. Therefore, the reduced model did not include this variable. Table 5 shows the variables found to be significantly associated with uptake of the seasonal influenza vaccination.
The results show that the odds of influenza vaccination uptake among Arab mothers are more than three times the odds among Jewish mothers. Low health literacy is positively associated with influenza vaccination uptake: for each one-unit increase in the low health literacy index, the odds of uptake increase by 43%. Inclusion in the school-located vaccination program is positively associated with influenza vaccination uptake: for each one-unit increase in the school-located vaccination index, the odds of uptake increase by 84%. Perceived risk of the influenza vaccination is negatively associated with influenza vaccination uptake: for each one-unit increase in the perceived risk of influenza vaccination index, the odds of uptake decrease by 75%. Perceived risk of seasonal influenza disease is positively associated with influenza vaccination uptake: for each one-unit increase in the perceived risk of seasonal influenza disease index, the odds of uptake increase by 75%.
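The percentage wording above follows directly from the fitted odds ratios; the short worked example below shows the conversion, using odds ratios implied by the reported percentages (1.43 and 0.25) rather than the actual model output.

```python
# Converting a logistic-regression coefficient to "% change in odds".
import math

beta_literacy = math.log(1.43)            # coefficient implying OR = 1.43
pct = (math.exp(beta_literacy) - 1) * 100
print(f"odds increase by {pct:.0f}% per one-unit increase")   # 43%

beta_risk = math.log(0.25)                # OR = 0.25
pct = (math.exp(beta_risk) - 1) * 100
print(f"odds change by {pct:.0f}% per one-unit increase")     # -75%
```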
---
Variables specifically associated with mothers' uptake of HPV vaccination
The first model for HPV vaccination included the general variables of ethnicity, attitudes, trust in the system, trust in the family doctor, school-located vaccination program and health literacy, and the specific variables of vaccine risk perception and disease risk perception. In this model, the general variables of attitudes (p = 0.3147), trust in the family physician (p = 0.4995), low health literacy (p = 0.1324) and disease risk perception (p = 0.7337) were not found to be significant. Therefore, to examine the precise effect of each variable on HPV vaccination uptake, we eliminated these variables and ran the multiple regression with the significant variables only.
(Table footnotes: a A moshav is a form of rural living unique to the State of Israel in which a group of residents live together in a joint financial arrangement. These residents are known as moshav members. Unlike the historical kibbutz framework, in the moshav the family is an independent financial unit operating in a framework of mutual assistance. Every moshav member is allocated a plot of land, which in most cases is used for agriculture [63]. b A kibbutz is a form of communal living unique to Zionism, the pre-state Yishuv period and the State of Israel, based on Zionist aspirations to resettle the Land of Israel as well as on the socialist values of human equality and of a joint economy and ideology. A kibbutz is usually a small locality with only a few hundred residents and supports itself through agriculture and industry [64].)
Table 6 shows the variables found to be significantly associated with HPV vaccination uptake:
The results show that the odds of HPV vaccination uptake among Arab mothers are more than six times the odds among Jewish mothers. Trust in the health system is negatively associated with HPV vaccination uptake: for each one-unit increase in the trust in the health system index, the odds of uptake decrease by 26%. Inclusion in the school-located vaccination program is positively associated with HPV vaccination uptake: for each one-unit increase in the school-located vaccination index, the odds of uptake increase by 51%. Perceived risk of the HPV vaccination is negatively associated with HPV vaccination uptake: for each one-unit increase in the perceived risk of HPV vaccination index, the odds of uptake decrease by 61%. In addition, the odds of HPV vaccination uptake for female youth are 59% lower than the odds of uptake for male youth.
---
Differences in mothers' uptake of the two vaccination types by ethnic group
Examination of the ethnic subgroups reveals differences in mothers' vaccination uptake. With respect to mothers' uptake of the seasonal influenza vaccination, the highest uptake rates were found in the Northern Bedouin (74%) and Druse (74%) groups, followed by the Muslim group (60%). The lowest uptake rate in Arab society emerged among the Christians (46%). Moreover, secular Jewish mothers exhibited a lower uptake rate (38%) than any of the Arab groups, though higher than the religious/traditional Jewish mothers (26%), who exhibited the lowest uptake rate. With respect to HPV vaccination, the Northern Bedouin population exhibited the highest uptake rate (99%) of all the subgroups. The Druse population also exhibited a relatively high uptake rate (92%), as did the Muslim group (92%). Again, the Christians exhibited the lowest uptake rate in Arab society (82%). The secular Jewish mothers exhibited an HPV uptake rate of 53%, which was lower than that of all the Arab subgroups yet higher than that of the religious/traditional Jewish mothers (33%), who exhibited the lowest HPV vaccination uptake rate (see Fig. 2).
The results of the McNemar's test (Table 7) show that in addition to differences between the ethnic groups with respect to uptake of the two vaccinations, each ethnic group (except for the religious Jewish group) exhibited significant differences in uptake according to vaccination type: HPV vs. seasonal influenza. The findings show that HPV vaccination uptake is significantly higher than seasonal influenza vaccination uptake in all the subgroups except for the religious Jewish group, where the difference is not significant.
---
Variables associated with vaccination uptake according to ethnic subgroup
Examination of the variables associated with uptake of the two vaccinations according to ethnic subgroup revealed differences in the means of both the general and the specific variables for each vaccination type, as illustrated in Tables 8, 9, 10, 11, 12, 13 and 14 and the accompanying Figs. 3, 4, 5, 6, 7, 8 and 9.
The ANOVA for the dependent variable of trust in the health system revealed a significant difference between the different ethnic groups [F(5,687) = 24.13, P < 0.0001]. Multiple comparison analysis using the Tukey correction to examine the significant differences between the ethnic groups showed that Christian, Muslim and Druse women had a significantly higher level of trust in the health system than Jewish women (secular and religious) and Bedouin women.
The ANOVA for the dependent variable of trust in the family doctor revealed a significant difference between the ethnic groups [F(5,687) = 19.45, P < 0.0001]. The multiple comparison analysis using the Tukey correction showed that Bedouin women exhibited a significantly higher level of trust in the family doctor than all the other groups, except for Druse women. Moreover, the level of trust in the family doctor among Jewish women (secular and religious) was significantly lower than that of Arab women in all the ethnic groups.
The ANOVA for the dependent variable of Low health literacy revealed a significant difference between the ethnic groups [F(5,687) = 52.04, P < 0.0001]. Multiple comparison analysis using the Tukey correction showed that Bedouin women exhibited the highest level of Low health literacy, with a significant gap between them and all the other groups. Secular Jewish women exhibited the lowest level of Low health literacy, with a significant gap between them and three other groups-Bedouin, Druse and Muslim women.
The ANOVA for the dependent variable of general attitudes toward vaccination revealed a significant difference between the ethnic groups [F(5,687) = 24.53, P < 0.0001]. Multiple comparison analysis using the Tukey correction showed that Bedouin women exhibited the highest level of support for vaccinations, significantly higher than that of all the other groups. Druse and Muslim women were second in their level of support for vaccinations. The other groups, Christian and Jewish, exhibited a lower level of support for vaccination, with religious Jewish women exhibiting the lowest level of support, significantly lower than all the other groups with the exception of secular Jewish women.
The ANOVA for the dependent variable of vaccinations given at school revealed a significant difference between the ethnic groups [F(5,687) = 41.67, P < 0.0001]. Multiple comparison analysis using the Tukey correction showed that giving the vaccinations at school was the most significant factor for Bedouin women, significantly higher than for all the other groups. Jewish women (secular and religious) rated this factor as significantly lower than the Arab women from all the ethnic groups.
The ANOVA for the dependent variable of risk of seasonal influenza vaccine revealed a significant difference between the ethnic groups [F(5,687) = 2.81, P = 0.0161]. Multiple comparison analysis using the Tukey correction showed that perceived risk of the seasonal influenza vaccine was significantly higher among religious Jewish women than among Muslim and Druse women. No other significant differences in perceived risk were found among the other ethnic groups.
The ANOVA for the dependent variable of risk of the HPV vaccine revealed a significant difference between the ethnic groups [F(5,687) = 28.4, P < 0.001]. Multiple comparison analysis using the Tukey correction showed that perceived risk of the HPV vaccination was significantly higher among Jewish women (secular and religious) than among Arab women in all the ethnic groups. Moreover, a significant difference in level of perceived risk of the HPV vaccination was found between Christian and Bedouin women, with Christian women perceiving the vaccination as riskier than Bedouin women.
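The analyses above follow a common pipeline: an omnibus one-way ANOVA for each dependent variable, followed by Tukey-corrected pairwise comparisons between the ethnic groups. Below is a minimal Python sketch of that pipeline with simulated, illustrative scores; the group means are invented, and the original analyses were not run in Python.

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups = ["Secular Jewish", "Religious Jewish", "Muslim", "Christian", "Druse", "Bedouin"]

# Simulated 'trust in the family doctor' scores per ethnic group (1-5 scale, invented means).
data = pd.DataFrame([
    {"group": g, "trust": score}
    for g, mu in zip(groups, [3.2, 3.1, 3.9, 3.8, 4.1, 4.3])
    for score in np.clip(rng.normal(mu, 0.6, 100), 1, 5)
])

samples = [data.loc[data["group"] == g, "trust"] for g in groups]
f_stat, p_value = f_oneway(*samples)                      # omnibus one-way ANOVA
print(f"F({len(groups) - 1},{len(data) - len(groups)}) = {f_stat:.2f}, p = {p_value:.4g}")

tukey = pairwise_tukeyhsd(data["trust"], data["group"])   # Tukey-corrected multiple comparisons
print(tukey.summary())
```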
In summary, among the general variables, trust in the family doctor exhibited the highest mean in all the ethnic groups. The variable of low health literacy exhibited a low mean in all the ethnic groups except for the Northern Bedouin mothers, who reported major difficulties in searching for information about vaccinations. The Christian mothers had the highest literacy of all the Arab groups in searching for information, and the secular Jewish mothers had the highest literacy of all the subgroups. With respect to the seasonal influenza vaccination, Jewish mothers (and specifically religious as opposed to secular mothers) perceived the vaccination as riskier than Arab mothers from all the subgroups, except for Christian mothers, whose risk perceptions were equivalent to those of the secular Jewish mothers. With respect to the HPV vaccination, the highest risk perceptions were among the religious Jewish mothers and the lowest among the Northern Bedouin mothers.
---
Discussion
This pioneering research study provides an in-depth examination of decision-making processes among subgroups in Arab society in Israel with respect to two vaccinations recently introduced to the school-located vaccination program: the HPV vaccination and the seasonal influenza vaccination. The study describes the variables associated with vaccination uptake among subgroups in Arab society as well as among certain segments of the Jewish population (secular and religious Jews). The study's findings show that the variable of including the two vaccines in the school program is the primary variable influencing Arab mothers' decision-making with respect to the HPV and seasonal influenza vaccinations. Vaccination inclusion in the school-located vaccination program encourages parents to vaccinate their children and increases the chances of vaccination uptake. With respect to framing strategies in health communication, vaccination inclusion in the school-based program grants the vaccination medical legitimacy, which also influences parental uptake [65]. These findings are in line with those of other studies showing various reasons for parental preference for vaccinating their children at school, among them lack of access to medical services, limited time to take children for vaccinations, inability to leave work for this purpose and more [66][67][68]. Perceived risk of the vaccination itself is also associated with mothers' decision-making processes. This finding is compatible with other studies showing that parents decide not to vaccinate their children based on high risk perceptions related to a lack of trust in vaccination safety [14,50,69,70]. Moreover, as in many studies, the findings of this study indicate that high risk perceptions about the illness are also associated with mothers' uptake of the vaccinations. That is, the riskier mothers perceive an illness to be, the more likely they are to take up a vaccination that prevents it [7,8,[71][72][73].
The findings also show an association between trust in the medical system and decision-making with respect to the HPV vaccination. Other studies that examined decision-making for HPV vaccination among parents in Arab minority groups in Western countries also found this variable to be significant [74][75][76]. Yet despite high vaccination compliance, trust in the system is not very high even among the subgroups of Arab mothers. These findings can be explained by two factors: 1) Campaigns and explanatory materials designed to promote HPV vaccination in Arab society are not sufficiently transparent and lack cultural appropriateness [2,65]; 2) The recommendations of doctors and nurses, considered by Bedouin society to be reliable sources of information, are not sufficiently explicit [29,75,77].
Contrary to the findings of many studies worldwide, the findings of the current study show that health literacy and difficulties in searching for information about vaccinations are positively associated with mothers' decision-making. That is, the lower the mothers' health literacy and the more difficulties they have in searching for information, the more likely they are to uptake vaccinations [78][79][80][81]. This high vaccination uptake rate despite low health literacy can be explained by the fact that these mothers do not search for impartial information about the vaccination but rather receive their information exclusively from the health system. Because they do not search for information, these mothers are not exposed to the scientific controversy surrounding the HPV vaccine [37][38][39] or to the questions raised about the effectiveness of the influenza vaccine [5,51,52]. Various studies have shown that minority groups usually have low health literacy, are less exposed to scientific controversies surrounding vaccinations and are less hesitant about vaccinations [2,29,82].
With respect to the sex of the child in the case of the HPV vaccination, the results of the current study are in line with other studies showing that the child's gender plays a role in mothers' decision-making regarding the HPV vaccination [27,28,50,68,82]. Indeed, the findings of the current study show that mothers are more likely to vaccinate boys than girls. In conservative societies, and particularly in Arab society, the matter of sexuality is generally taboo, particularly among women. Therefore, men in conservative societies are thought to be more likely to engage in frequent sexual relations than women, leading to the assumption that mothers are more likely to decide to give the HPV vaccination to their male children [83,84].
With respect to the various population subgroups, the findings point to differences in mothers' uptake rates. Specifically, the Northern Bedouin population emerged as the group with the highest vaccination uptake rate among all the Arab subgroups. We propose several explanations for this finding. First, it is possible to assume that these high vaccination rates derive from the fact that a significant portion of Northern Bedouin mothers are illiterate (more than 60%) [85]. Consequently, their health literacy is low and their ability to search for, read and analyze health information in general and information about vaccinations in particular is limited [86][87][88].
Several studies indicate that mothers with a high level of education have lower vaccination uptake rates due to their ability to search for information about vaccinations and make decisions based on facts and on "informed consent" [89]. Furthermore, the findings show that Bedouin mothers vaccinate their children despite their mistrust of the health system. It is reasonable to assume that the main and perhaps only information source for Northern Bedouin mothers is the Ministry of Health. Studies have shown that Bedouin mothers usually take institutional health directives seriously and implement them regardless of their level of trust [89,90]. Moreover, despite this low level of trust in the system this group has a very high level of trust in doctors, making the family doctor's recommendation a highly influential factor in mothers' decision-making regarding vaccines. Thus they fully adopt the recommendations of the ministry or the doctor representing the health system [73,91]. These findings contradict the findings of two studies conducted among the Bedouin population in the south of Israel, which showed that these Bedouins do not complete their children's vaccination programs due to their lack of access to health services and lack of trust in the government [7,73,91]. It is important to note that the Bedouins living in the south, mainly those in unrecognized villages, have less convenient access to medical services than those living in the north examined in this research, whose superior access to medical services enables them to complete the vaccination programs.
The results of this study also show that the Druse population has the second highest uptake rates for both vaccinations. There are several ways to interpret this finding. Many members of the Druse population serve in the Israeli military forces. This fact, together with their high levels of trust in the government and its decision-makers [92], may explain their high uptake of various types of vaccinations. Moreover, a substantial portion of the Druse population identifies itself with the dominant Jewish national group rather than the minority Arab population. Over the years, a picture has emerged of Druse solidarity with the Zionist ethos, while the Druse simultaneously distance themselves from the Arab and Islamic themes resonant among the Israeli-Arab sector of society [86,93]. Their desire to be part of the dominant Jewish population may lead to their similar or even higher vaccination uptake. Yet this interpretation may be qualified by the recently formulated Basic Law defining Israel as the Nation-State of the Jewish People (see Note 1 below), which may influence the reciprocal relations between the Druse and the State of Israel. Hence, future research is needed to verify this interpretation. The research findings also indicate that Muslim mothers are third in uptake rate for the two vaccinations. Examination of the variables associated with vaccination uptake shows that the variable of inclusion in the school-located vaccination program is one of the most significant variables associated with Muslim mothers' decision-making about the two vaccinations. It is possible to assume that including these vaccinations in the school program provides these mothers with legitimization to vaccinate their children along with a convenient way to do so [7][8][9][29].
With respect to the Christians, the final subgroup in the Arab population, the findings show that Christian mothers have the lowest vaccination uptake rate of all the Arab subgroups for both vaccinations. This finding can be explained by the fact that the Christian Arab population differs from the Muslim, Northern Bedouin and Druse groups in that they are more educated. Indeed, Christian society is marked by high socioeconomic status and a more modern lifestyle (for example, lower fertility rates) [94,95]. Their relatively low vaccination uptake may be tied to their higher education and literacy levels, which enable Christian mothers to search for information from other sources [94][95][96]. Thus, the Christian mothers may be exposed to discourse on controversies surrounding vaccinations. The research findings also show that like the Christian mothers, secular Jewish mothers, who are in fifth place in vaccination uptake, vaccinate their children at lower rates than all the Arab subgroups. As indicated by the current research, due to their high educational level, their high level of knowledge about vaccinations and their more hesitant attitudes toward vaccinations, Jewish mothers tend not to complete their children's vaccination programs [2, 3, 7-9, 96, 97].

Note 1: Basic Law: Israel as the Nation-State of the Jewish People, informally known as the Nation-State Bill or the Nationality Bill, anchors the national Jewish values of the State of Israel in a Basic Law, after many such values were already anchored in other laws. This Basic Law specifies the nature of the State of Israel as the nation-state of the Jewish people, the place where the Jewish people has a natural right to self-determination, a right that is exclusive to the Jewish people. The law also anchors the status of the state flag and state emblem and of "Hatikva" as the state anthem. It determines the use of the Hebrew calendar and the holidays of Israel and states that Hebrew is the official state language. The law also states that Jewish immigration is to be encouraged, that Jerusalem, complete and united, is the capital of Israel, and that Arabic is not a state language but has a special status in the state.
One of the more surprising findings of this study is related to uptake of the HPV vaccination among conservative population groups. The HPV vaccination is intended to prevent cervical cancer and genital warts caused by the human papillomavirus, which is transmitted through sexual relations. Arab society is considered to be a conservative and traditional society [29,84], particularly in the context of sexuality and sexual relations prior to marriage, which are a social taboo [40][41][42][43][98][99][100]. The findings of this study show that Arab mothers, without exception, vaccinate their children against the human papillomavirus at higher rates than Jewish mothers, despite the relationship between this vaccination and sexual activity. This finding can be explained by the lack of transparency that characterizes explanatory materials geared to increase awareness about the HPV vaccine among the Arab population. In another study in which we analyzed Arabic language explanatory materials issued by the Ministry of Health and the HMOs, we found that these materials did not refer to the sexual context of the vaccination, provided only partial information and were not culturally appropriate to Arab society [65]. Because Arab mothers are usually only exposed to information issued by the establishment and are unable to search for and process other information, it is reasonable to assume that they treat these materials as a reliable source of information and a basis for making decisions. Thus, promoting the HPV vaccine as preventing cancer serves to reframe the relationship between this vaccination and sexuality and increases the probability that the conservative Arab population will uptake the HPV vaccination. Religious Jewish society exhibits a cultural resemblance to Arab society in that it is also conservative and prohibits sexual relations before marriage. Nevertheless, the findings of this study show that the religious Jewish population differs from the Arab population with respect to vaccination uptake, as reflected in lower rates of HPV vaccination. These differences can be explained by the higher level of health literacy among religious Jewish mothers compared to Arab mothers, pointing to their greater ability to search for information and learn about the scientific controversy surrounding the vaccination and its association with sexuality, thus reducing their chances of HPV vaccination uptake [2,29,75].
This study was not designed to compare the Arab minority population in Israel to other Arab minorities worldwide regarding these two vaccinations. This issue should be the topic of future research.
This study has several limitations. First, the research was based on mothers' self-reports regarding their vaccination uptake, increasing the chances of report bias. Second, the study focused on the Arab population as the main research population and the Jewish population as a comparison group and did not examine subgroups in Jewish society. We recommend extending the study to the Jewish population and examining the decision-making processes regarding these two vaccinations among different Jewish subgroups. Moreover, additional research is warranted to examine mothers' decision-making with respect to various vaccinations, including identifying different variables that may have been associated with vaccination uptake over the years and detecting changes in vaccination trends, if any.
---
Conclusions
This pioneering research study reveals variations in vaccination uptake among different population subgroups. The study points to the important influence of variables related to trust, literacy and legitimacy of school vaccination. It also shows that all Arabs cannot be lumped together as one monolithic group. Indeed, they exhibit major differences. Examining variables associated with uptake of the two vaccines can provide decision-makers with an empirical basis for tailoring specific and appropriate interventions to each subgroup in order to achieve the highest vaccination uptake rate possible. The research also makes an important contribution to the literature on inequity in vaccination uptake as it exemplifies the variations within broad ethnic minority groups, which should be considered in policies and in practice. Moreover, media campaigns targeting the Arab population should be segmented to appeal to the various sub-groups according to their attitudes, needs and health literacy. The abilities and tools available to mothers must be reinforced so they can make intelligent decisions that are not based exclusively on trust in a third party such as the health or education system.
Vaccination hesitancy is on the rise worldwide, including in Jewish society in Israel. For this reason, it is important to take the public's feelings of hesitancy into consideration and to build trust in the medical system. Note that this research was conducted before the coronavirus crisis in Israel, and it is likely that the crisis has affected vaccination uptake in Arab society as well. Future research is therefore needed to continue investigating these subgroups to examine the impact of COVID-19 on their attitudes toward vaccinations and their vaccination uptake.
---
Availability of data and materials
Requests for more detailed information regarding the study should be addressed to the corresponding author.
---
Abbreviations
HPV: Human papillomavirus; Flu: Influenza
---
Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1186/s12939-021-01523-1.
Additional file 1. Research questionnaire.
---
Authors' contributions
NAES carried out this research as part of her PhD dissertation under the supervision of AGE and GSM. NAES conceptualized the study, reviewed the literature, conducted the data analysis, wrote the manuscript and took full responsibility for the study. AGE provided input on the study conceptualization, data analysis and writing the first drafts of the manuscript. GSM, ND, SBG and RG critically reviewed the manuscript and helped shape the final version of the manuscript. All authors approved the final manuscript.
---
Declarations
---
Ethics approval and consent to participate
This study was approved by the ethics committee of The Faculty of Social Welfare and Health Sciences at the University of Haifa (confirmation number 118/16). All the study participants gave their consent to participate in the research. The research does not provide any medical or personal information by which any participant can be personally identified, thus ensuring anonymity.
---
Consent for publication
All the study participants gave their consent to publish the research.
---
Competing interests
The authors declare that they have no competing interests.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 48,275 | 2,225 |
46d863b1e712e03f533605a49a1f9ba7f8d45e67 | Exposure to COVID-19 during the First and the Second Wave of the Pandemic and Coronavirus-Related PTSD Risk among University Students from Six Countries—A Repeated Cross-Sectional Study | 2,021 | [
"JournalArticle"
] | This study aimed to reveal differences in exposure to coronavirus disease during the first (W1) and the second (W2) waves of the pandemic in six countries among university students and to show the prevalence and associations between exposure to COVID-19 and coronavirus-related post-traumatic stress syndrome (PTSD) risk during W2. The repeated cross-sectional study was conducted among university students from Germany, Poland, Russia, Slovenia, Turkey, and Ukraine (W1: n = 1684; W2: n = 1741). Eight items measured exposure to COVID-19 (regarding COVID-19 symptoms, testing, hospitalization, quarantine, infected relatives, death of relatives, job loss, and worsening economic status due to the COVID-19 pandemic). Coronavirus-related PTSD risk was evaluated by PCL-S. The exposure to COVID-19 symptoms was higher during W2 than W1 among students from all countries, except Germany, where, in contrast, the increase in testing was the strongest. Students from Poland, Turkey, and the total sample were more frequently hospitalized for COVID-19 in W2. In these countries, and Ukraine, students were more often in quarantine. In all countries, participants were more exposed to infected friends/relatives and the loss of a family member due to COVID-19 in W2 than W1. A significant decrease in job loss due to COVID-19 was noted only in Ukraine. Economic status during W2 only worsened in Poland and improved in Russia. This was due to the significant waiving of restrictions in Russia and more stringent restrictions in Poland. The prevalence of coronavirus-related PTSD risk at three cutoff scores (25, 44, and 50) was 78.20%, 32.70%, and 23.10%, respectively. The prediction models for different severity of PTSD risk differed. Female gender, a prior diagnosis of depression, a loss of friends/relatives, job loss, and worsening economic status due to the COVID-19 pandemic were positively associated with high and very high coronavirus-related PTSD risk, while female gender, a prior PTSD diagnosis, experiencing COVID-19 symptoms, testing for COVID-19, having infected friends/relatives and worsening economic status were associated with moderate risk. | Introduction
The novel coronavirus disease 2019 (COVID-19), caused by the SARS-CoV-2 virus, has spread globally as a highly contagious infectious disease. The World Health Organization (WHO) [1] declared the COVID-19 pandemic on 11 March 2020. The pandemic is an unexpected, global phenomenon that has affected people not only by direct exposure to the disease but also indirectly via its various consequences, e.g., economic. The COVID-19 pandemic has caused the most profound global economic recession in the last eight decades [2]. Additionally, research shows that mental health problems associated with the pandemic extend to the general population and are not exclusively limited to individuals who have been infected [3]. Therefore, due to financial instability, the current pandemic can affect the mental health of individuals who are not at severe risk of becoming infected with COVID-19. The COVID-19 pandemic has considerably affected mental health. The review of mental health epidemiology indicates that a psychiatric epidemic co-occurs with the COVID-19 pandemic [4].
One group that is particularly susceptible to mental health deterioration during the ongoing pandemic is university students. Research has shown that student status (being a student) predicts mental health deterioration risk [5][6][7][8]. Moreover, the education sector has been strongly disrupted by the COVID-19 pandemic [9]. The factors contributing to students' mental health issues in the pre-pandemic period were academic pressure [10], financial obligations that may lead to poorer performance [11], and health concerns [12]. An additional risk factor for mental health problems is young age. Even though young adults are less susceptible to COVID-19 infection [13], they are more susceptible to mental health issues during the ongoing pandemic [14-16].
---
Post-Traumatic Stress Disorder (PTSD) and the COVID-19 Pandemic
Post-traumatic stress disorder (PTSD) is in the category of trauma- and stressor-related disorders [17]. The DSM-IV criteria for PTSD relating to exposure assumed, first, that the person experienced or was confronted with an event involving actual or threatened death or serious injury or a threat to the physical integrity of one's self or others (A1) and, second, that the person's response involved intense fear, helplessness, or horror (A2) [17]. However, in the DSM-5, significant changes have been introduced. The DSM-5 requires certain triggers, whether directly experienced, witnessed, or happening to a close family member or friend, but exposure through media is excluded unless the exposure is work-related. In addition, the second criterion of subjective response (A2) has been removed [18].
Pandemics are classified as natural disasters. PTSD is one of the most-studied psychiatric disorders and is related to natural disasters [19]. However, the DSM-5 definition notes that a life-threatening illness or debilitating medical condition is not necessarily a traumatic event. Therefore, there is a claim that exposure to the COVID-19 pandemic cannot be treated as a traumatic experience causing PTSD due to the new criteria in the DSM-5 [20]. There is an ongoing debate regarding the possibility of the anticipatory threat of the COVID-19 pandemic to be a traumatic experience and, therefore, the possibility of psychological responses coherent with PTSD [21]. Additionally, recent research [22] strongly supports this claim and emerging research in this area. Following that research, we recognize the COVID-19 pandemic as a traumatic stressor event that can cause a PTSD-like response.
Probable PTSD related to the pandemic ranges from 7% to even 67% in the general population [20]. A meta-analysis of 14 studies conducted during the first wave of the pandemic, between February and April, revealed a high rate of PTSD (23.88%) in the general population [23]. The prevalence rate of PTSD in students varies widely. In the group of home-quarantined Chinese university students (n = 2485) one month after the outbreak, the prevalence was 2.7%. However, Chi et al. [24] revealed that in a sample of Chinese students (n = 2038), the prevalence of clinically relevant PTSD reached 30.8% during the pandemic. Among a large sample of French university students (n = 22883), the rate of probable PTSD one month after the COVID-19 lockdown was 19.5% [25].
The predictors of PTSD in the Chinese university student sample were older age, knowing people who had been isolated, higher level of anxious attachment, adverse experiences in childhood, and lower level of resilience. However, gender, family intactness, subjective socioeconomic status (SES), and the number of confirmed cases of COVID-19 in participants' areas turned out to be irrelevant predictors [24]. Previous research showed that women typically show higher rates of PTSD than men [26]. PTSD usually occurs almost twice as often in women as in men [27]. This was also shown after natural disasters (earthquakes) among young adults [28]. However, the role of gender in PTSD prevalence was not confirmed during the COVID-19 pandemic. The meta-analysis showed that gender was not a significant moderator of PTSD [23]. Additionally, there is strong evidence that prior mental health disorders, particularly anxiety and depression, are predictors of PTSD [29]. Furthermore, previous exposure to traumatic events is a risk factor for PTSD [30].
The research showed a significant association between exposure to COVID-19 and the severity of PTSD symptoms in university student samples [25,31]. General exposure to COVID-19 turned out to be a significant risk factor for anxiety in Czech, Polish, Turkish, and Ukrainian university students but irrelevant for anxiety in Colombian, German, Israeli, Russian, and Slovenian students during the first wave of the pandemic [32]. The same study showed that depression risk is also associated with general exposure to COVID-19 among university students from the Czech Republic, Israel, Russia, Slovenia, and Ukraine. However, in Colombia, Germany, Poland, and Turkey, the exposure was irrelevant to depression risk among university students [32].
In the present study, we will refer to university students from six countries: Germany, Poland, Russia, Slovenia, Turkey, and Ukraine between the first wave (May-June 2020) (W1) and the second wave (mid-October-December 2020) (W2) of the COVID-19 pandemic. The countries in our study represent the cultural diversity depicted by traditional vs. secular and survival vs. self-expression values. The Inglehart-Welzel World Cultural Map [33] aggregates all countries into eight clusters based on the dimensions of those values. Four out of eight value clusters are exemplified in our study. Protestant Europe is represented by Germany; Catholic Europe by Poland and Slovenia; Orthodox Europe by Ukraine and Russia; and the African-Islamic region by Turkey. Therefore, these countries represent a great diversity of global cultural values.
To present the ongoing pandemic situation in each of the six countries, we refer to the Oxford COVID-19 Government Response Tracker (OxCGRT), which enables tracking the stringency of government responses to the COVID-19 pandemic across countries and time [34]. The mean stringency index value in W1 varied between 47.91 in Slovenia and 82.64 in Ukraine. During W2, the lowest index was observed in Russia (44.80), while the highest was in Poland (75.00). The greatest increase of the OxCGRT was noted in Slovenia, while the greatest decrease of the index was in Ukraine. The detailed description of the stringency of restrictions in six countries during W1 and W2 is shown in Figure 1a. Since the national restrictions mainly refer to closing workplaces and economic measures, we assumed that in the countries that significantly waived the restrictions during W2 (e.g., Russia), the portion of university students who reported exposure to the COVID-19 pandemic in terms of losing a job and deterioration of the economic status would be lower during W2. We also analyzed the mean number of daily new cases and deaths based on an interactive web-based dashboard to track COVID-19 [35] (mean of the first and the last day of conducting the study in each country during the first and the second wave). The data on the mean number of daily cases presented in Figure 1b and on the mean number of deaths in Figure 1c show that in four countries (Germany, Russia, Turkey, and Ukraine), despite the higher number of daily cases and deaths due to COVID-19 during W2, the restrictions decreased. The largest increase in daily cases and deaths during W2 compared to W1 was noted in Poland, Russia, Turkey, and Ukraine. Our second hypothesis was that in countries with a higher number of cases and deaths during W2, the proportion of students reporting higher exposure to COVID-19 (symptoms, testing, hospitalizing, being in a strict 14-day quarantine, having infected friends/family, and experiencing death of friends/relatives) in W2 would be higher compared to W1. The main aim of this study was to verify the differences in the exposure to the COVID-19 pandemic in university students from Germany, Poland, Russia, Slovenia, Turkey, and Ukraine between the first wave (W1) and the second wave (W2) of the COVID-19 pandemic. We expected significant differences in various aspects of exposure to COVID-19 dependent on country, which might be interpreted in the context of stringency of restrictions and the number of daily cases and deaths due to the coronavirus.
In this study, we acknowledge the COVID-19 pandemic as a traumatic stressor event that can cause a PTSD-like response. The second aim is to reveal whether different aspects of exposure to COVID-19 (symptoms, testing, hospitalizing, being in quarantine, having infected friends/family, experiencing the death of friends/relatives, losing a job, worsening of economic status), as well as previously diagnosed mental health problems (depression, anxiety, PTSD) and gender, predict coronavirus-related PTSD severity risk in international samples of university students from six countries during W2.
This study fills the gap in the literature related to the link between exposure to the COVID-19 pandemic and coronavirus-related PTSD during the second wave of the pandemic among students from six countries.
---
Materials and Methods
---
Participants
The required sample size for each country group was computed a priori using the G*Power software (Düsseldorf, Germany) [36]. To detect a medium effect size of Cohen's W = 0.3 with 95% power in a 2 × 2 χ2 contingency table, df = 1 (two groups in two categories each, two-tailed), α = 0.05, G*Power suggests we would need 145 participants in each country group (non-centrality parameter λ = 13.05; critical χ2 = 3.84; power = 0.95). All the respondents were eligible for the study and confirmed their student status (being a current university student).
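The same calculation can be approximated outside of G*Power. The sketch below uses statsmodels' chi-square power routine as an alternative tool (an assumption of this illustration, since the authors used G*Power), with the parameters reported above.

```python
import math
from statsmodels.stats.power import GofChisquarePower

# Chi-square power analysis: medium effect w = 0.3, alpha = 0.05, power = 0.95.
# Here df = n_bins - 1 = 1, matching the one degree of freedom of the 2 x 2 test.
analysis = GofChisquarePower()
n_required = analysis.solve_power(effect_size=0.3, nobs=None, alpha=0.05,
                                  power=0.95, n_bins=2)
print(math.ceil(n_required))   # 145, matching the reported G*Power result
```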
The cross-sectional study was conducted in six countries with a total of 1684 students during the first wave of the pandemic-in Germany (n = 270, 16%), Poland (n = 300, 18%), Russia (n = 285, 17%), Slovenia (n = 209, 13%), Turkey (n = 310, 18%), and Ukraine (n = 310, 18%)-and a total of 1741 during the second wave, in Germany (n = 276, 16%), Poland (n = 341, 20%), Russia (n = 274, 15%), Slovenia (n = 206, 12%), Turkey (n = 312, 18%), and Ukraine (n = 332, 19%).
The total sample of German students was recruited from University of Bamberg during the first measurement (W1) (n = 270, 100%) and the second measurement (W2) (n = 276, 100%). The Polish sample during W1 consisted of 300 students recruited from Maria Curie-Sklodowska University (UMCS) in eastern Poland (n = 149, 49%) and from University of Opole (UO) in the south of Poland (n = 151, 51%). During W2, the Polish sample comprised 341 students from the same universities: UMCS (n = 57, 17%) and UO (n = 284, 83%). There were 285 Russian students in W1 and 274 in W2. Russian students were recruited from universities located in Sankt Petersburg: Peter the Great St. Petersburg Polytechnic University (W1: n = 155, 54%; W2: n = 156, 54%), Higher School of Economics (HSE) University (W1: n = 90, 31%; W2: n = 39, 14%), and St. Petersburg State University of Economics and Finance (W1: n = 42, 15%; W2: n = 78, 29%). The total sample in Slovenia comprised students recruited from University of Primorska in Koper during W1 (n = 209, 100%) and W2 (n = 206, 100%). During W1, Turkish students were recruited from eleven Turkish universities, mostly located in eastern Turkey: Bingol University, Bingöl (n = 148, 48%); Atatürk University, Erzurum (n = 110, 35%); Muğla Sıtkı Koçman University, Muğla (n = 35, 11%); Ağrı İbrahim Çeçen University, Ağrı (n = 6, 2%); Fırat University, Elazığ (n = 3, 0.8%); Kırıkkale University, Kırıkkale (n = 1, 0.3%); Adnan Menderes University, Aydın (n = 1, 0.3%); Başkent University (n = 3, 1%); Boğaziçi University (n = 1, 0.3%); Dicle University, Diyarbakır (n = 1, 0.3%); and Istanbul University (n = 1, 0.3%). During W2, Turkish students were recruited from seven Turkish universities: Atatürk University, Erzurum (n = 110, 35%); Ağrı İbrahim Çeçen University, Ağrı (n = 71, 23%); Bingol University, Bingöl (n = 57, 18%); Iğdır University, Iğdır (n = 26, 8%); Muğla Sıtkı Koçman University, Muğla (n = 20, 7%); Başkent University (n = 16, 5%); and Bursa Uludağ University, Bursa (n = 12, 4%). Ukrainian students represented Lviv State University of Physical Culture (W1: n = 310, 100%; W2: n = 332, 100%).
Female students constituted 70% of the sample (n = 1174) during W1 and 73% (n = 1275) during W2. The majority of the participants lived in rural areas and small towns in W1 (n = 1021, 61%) and in W2 (n = 1029, 59%). Most of the students were in first-cycle studies (bachelor's level) (W1: n = 1269, 75%; W2: n = 1324, 76%). The average age was 22.80 (SD = 4.65) in W1 and 22.73 (SD = 3.86) in W2. The median age was 22.
Students reported prior professional diagnosis of depression (n = 356, 20.40%), anxiety (n = 287, 16.50%), and PTSD (n = 205, 11.80%). The data regarding previous diagnosis in Germany were not collected due to an electronic problem.
The sociodemographic profiles of the participants in W1 and W2 are highly similar and comparable. Detailed descriptive statistics and previous diagnoses of depression, anxiety, and PTSD for each country during W1 and W2 are presented in Table 1.
All the questions included in the Google Forms questionnaire were answered in Poland, Russia, Slovenia, Turkey, and Ukraine. In those countries, participants could not omit any response; therefore, there were no missing data. However, in the German sample, the study was conducted via SoSci Survey, and there were missing data (n = 5, 0.02%). Therefore, hot-deck imputation was introduced to deal with the small amount of missing data in the German sample.
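Hot-deck imputation replaces a missing value with an observed value taken from a similar respondent (a "donor"). The exact donor-selection rule used for the German sample is not reported, so the sketch below shows one common variant (a random donor drawn within a stratum) with hypothetical column names; it is an illustration, not the authors' procedure.

```python
import numpy as np
import pandas as pd

def hot_deck_impute(df, column, group_col, rng=None):
    """Fill missing values in `column` with a randomly drawn observed value
    from the same `group_col` stratum; fall back to the overall pool if a
    stratum has no observed donors."""
    rng = rng or np.random.default_rng(42)
    df = df.copy()

    def fill(series):
        donors = series.dropna().to_numpy()
        if donors.size == 0:
            donors = df[column].dropna().to_numpy()
        return series.apply(lambda v: rng.choice(donors) if pd.isna(v) else v)

    df[column] = df.groupby(group_col)[column].transform(fill)
    return df

# Illustrative data: one missing item response imputed within gender strata.
data = pd.DataFrame({"gender": ["F", "F", "M", "M", "F"],
                     "pcl_item_1": [3, np.nan, 2, 4, 5]})
print(hot_deck_impute(data, "pcl_item_1", "gender"))
```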
---
Study Design
This repeated cross-sectional study among students from Germany, Poland, Russia, Slovenia, Turkey, and Ukraine was conducted during the first wave (W1) (May-June 2020) and the second wave (W2) (mid-October-December 2020) of the pandemic. The results of the first measurement (W1) concerning depression and anxiety have already been described in detail in a previous publication [32].
A cross-national first measurement was conducted online between May and June in the following countries: Germany (2-25 June), Poland (19 May-25 June), Russia (1-22 June), Slovenia (14 May-26 June), Turkey (16-29 May), and Ukraine (14 May-2 June). The second measurement during W2 was conducted between mid-October and December 2020 in Germany (15 October-1 November), Poland (11 November-1 December), Russia (28 October-8 December), Slovenia (10 October-15 December), Turkey (18 November-8 December), and Ukraine (15 October-15 November). The survey study was conducted via Google Forms in all countries except Germany, where the SoSci Survey platform was used. The invitation to participate in the survey was sent to students by researchers via various means, e.g., the Moodle e-learning platform, student offices, email, or social media. The average time of data collection was 23.26 min (SD = 44.03). In Germany, students were offered the possibility of entering a lottery for a 20 EUR Amazon gift card in W1 and a 50 EUR gift card in W2 as an incentive to participate. No form of compensation was offered as an incentive to participate in the five other countries. To minimize bias sources, the student sample was highly diversified regarding its key characteristics: the type of university, field of study, and the cycle of study. Sampling was purposive. The selection criterion was university student status. The study followed the ethical requirements of anonymity and voluntariness of participation.
---
Measurements
---
Sociodemographic Survey
Demographic data included questions regarding gender, place of residence (village, town, city, agglomeration), the current level of study (bachelor, master, postgraduate, doctoral), field of study (social sciences, humanities, and art, natural sciences, medical and health sciences), the year of study, and the study mode (full-time vs. part-time). The questionnaire was primarily designed in Polish and English. In the second step, it was translated from English to German, Russian, Slovenian, Turkish, and Ukrainian using backward translation by a team consisting of native speakers and psychology experts according to guidelines [37]. The participants were asked about their previous medical conditions regarding depression, anxiety, and PTSD diagnosed by a doctor or other licensed medical provider. The answer 'yes' was coded as 1, 'no' as 0.
---
Self-Reported Exposure to COVID-19
Exposure to COVID-19 [38] was assessed based on eight questions regarding the COVID-19 pandemic in terms of (1) symptoms that could indicate coronavirus infection; (2) being tested for COVID-19; (3) hospitalization due to COVID-19; (4) experiencing strict quarantine for at least 14 days, in isolation from loved ones due to COVID-19; (5) coronavirus infection among family, friends, or relatives; (6) death among relatives due to COVID-19; (7) losing a job due to the COVID-19 pandemic (by the person or their family); and (8) experiencing a worsening of economic status due to the COVID-19 pandemic. Participants marked their answers to each question, coded as 0 = no and 1 = yes. Each aspect of the exposure to COVID-19 was analyzed separately. The self-reported exposure to COVID-19 items were developed based on the methodology proposed by Tang et al. [31].
---
Coronavirus-Related PTSD
Coronavirus-related PTSD was assessed using the 17-item PTSD Checklist-Specific Version (PCL-S) [39] on a five-point Likert scale ranging from 1 = not at all to 5 = extremely, with the total score ranging from 17 to 85. Higher scores indicated higher PTSD levels. A lower cutoff score (25) [40] is used for screening purposes, whereas higher cutoff points (44 and 50) [41] are used to minimize false-positive diagnoses.
We used the PCL-S, based on the DSM-IV, because we wanted to be sure that we measured coronavirus-related PTSD. The specific stressful event to which the symptoms referred was the COVID-19 pandemic. Therefore, we utilized the specific version and asked about symptoms in response to a specific stressful experience: the COVID-19 pandemic. We also added the COVID-19 pandemic aspect to each of the items. Participants estimated how much they were bothered by this specific problem (the COVID-19 pandemic) in the past month. Therefore, we did not explore general PTSD but PTSD related to a specific stressful event. Cronbach's α in the total sample in this study was 0.94.
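A minimal sketch of how a PCL-S total score and the three cutoff-based risk categories used in this study can be derived is shown below; the example responses are hypothetical.

```python
import numpy as np

# Screening and diagnostic thresholds cited above (25 = moderate, 44 = high, 50 = very high risk).
CUTOFFS = {"moderate": 25, "high": 44, "very_high": 50}

def pcl_s_total(item_responses):
    """Sum 17 PCL-S items rated 1-5; valid totals range from 17 to 85."""
    items = np.asarray(item_responses)
    assert items.shape[-1] == 17 and items.min() >= 1 and items.max() <= 5
    return items.sum(axis=-1)

def risk_flags(total):
    """Return binary risk indicators at the three cutoff scores."""
    return {label: int(total >= cut) for label, cut in CUTOFFS.items()}

# Hypothetical respondent: mostly 'moderately' (3) with a few 'quite a bit' (4) answers.
example = [3] * 12 + [4] * 5
total = pcl_s_total(example)
print(total, risk_flags(total))   # 56 -> flagged at all three cutoffs
```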
---
Stringency Index
We used the Oxford COVID-19 Government Response Tracker (OxCGRT) to portray the stringency of government responses to the COVID-19 pandemic across countries and time [34]. The stringency level is composed of various indicators. It refers to community mobility (restrictions on gatherings, workplace closings, public school closings, cancelation of public events, stay-at-home requirements, transport closings, international travel restrictions, and restrictions on internal movement) and economic measures (fiscal measures, income support, debt/contract relief, and international support). The indicators regarding public health issues are: testing policy, public information campaigns, contact tracing, investments in vaccines, emergency investment in health care, vaccination, and facial coverings. The stringency of government responses is the reaction to the pandemic spread in each country. These measurements are rescaled to a value ranging from 0 to 100, where 100 denotes the strictest restrictions. The timing was crucial for the stringency-level evaluation. The stringency rate in this study was calculated as the mean of the stringency values on the first and the last day of data collection in each country. This index portrays the pandemic situation for the general population in each country well.
---
Statistical Analysis
The statistical analysis included descriptive statistics: mean (M), standard deviation (SD), and 95% confidence interval (CI) with lower limit (LL) and upper limit (UL). The analysis was conducted in SPSS 27. To verify the first hypothesis regarding the change in exposure to COVID-19, we used the Pearson χ2 independence test for each country and each aspect of exposure to COVID-19 separately, using a 2 × 2 contingency table. The phi (ϕ) value was used to assess the effect size [42]. An effect size equal to 0.1 is considered a small effect, 0.3 a medium effect, and 0.5 a large effect. We showed the prevalence rate for coronavirus-related PTSD. The following step was to verify whether the various aspects of exposure to the COVID-19 pandemic are associated with coronavirus-related PTSD in university students. We conducted multivariate logistic regression analysis for coronavirus-related PTSD risk among the international student sample from the six countries. All predictors were entered into the model simultaneously. The multiple regression models reveal risk factors in their simultaneous effect on mental health. Therefore, the multivariate regression model is closer to actual psychological complexity than the bivariate model, where particular factors independently predict mental health issues.
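For illustration, the two analysis steps described above can be reproduced as follows in Python (the original analyses were run in SPSS 27); the counts, simulated data, and variable names are invented for this sketch and do not come from the study.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# Step 1: 2 x 2 chi-square test of independence (wave x exposure item) with phi effect size.
# Hypothetical counts: rows = W1/W2, columns = not exposed / exposed.
table = np.array([[220, 80],
                  [180, 160]])
chi2, p, dof, _ = chi2_contingency(table, correction=False)
phi = np.sqrt(chi2 / table.sum())          # phi coefficient for a 2 x 2 table
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}, phi = {phi:.2f}")

# Step 2: multivariate logistic regression with all predictors entered simultaneously,
# reported as adjusted odds ratios with 95% CIs (illustrative simulated data).
rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({"female": rng.integers(0, 2, n),
                  "job_loss": rng.integers(0, 2, n),
                  "worse_economic_status": rng.integers(0, 2, n)})
logit = -1.0 + 0.7 * X["female"] + 0.5 * X["job_loss"] + 0.6 * X["worse_economic_status"]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # simulated binary PTSD-risk indicator

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds_ratios = pd.DataFrame({"aOR": np.exp(model.params),
                            "2.5%": np.exp(model.conf_int()[0]),
                            "97.5%": np.exp(model.conf_int()[1])})
print(odds_ratios.round(2))
```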
---
Results
Pearson's χ2 independence test showed significant differences between the measurements during W1 (May-June 2020) and W2 (mid-October-December 2020) in each of the six countries regarding various aspects of self-reported exposure to COVID-19. The φ coefficient value allowed for the assessment of the effect size [42].
---
Comparison of Self-Reported Exposure to the COVID-19 Pandemic
A significantly higher proportion of students experienced symptoms of coronavirus infection during the second wave in the total international sample of university students. However, the effect size was small. Similarly, in Poland, Russia, Slovenia, and Turkey, the proportion of students experiencing COVID-19 symptoms was significantly higher in W2, although the effect size was small. A significant medium effect size was noted in Ukraine. Therefore, the most pronounced increase in the proportion of students experiencing the COVID-19 symptoms during the second wave was observed in Ukraine. However, the one country where there was no significant effect was Germany. Therefore, the university students in Germany did not experience higher exposure to the infection in the second wave, unlike all other students from the five countries.
However, a significant medium effect size was observed in German students regarding testing for coronavirus. In all other countries and the total sample, the effect was also significant but small. Therefore, all university students reported a higher number of tests in W2, but the difference was the largest in Germany.
The exposure to being hospitalized for coronavirus was relatively small. Only five participants (0.30%) in W1 and 21 (1.21%) in W2 answered yes to this question in the total sample. However, the difference was significant. A significantly higher proportion of students was hospitalized in Poland and Turkey during W2, although the effect size was small. In Germany, Russia, Slovenia, and Ukraine, the difference was insignificant.
A higher proportion of students experienced being in a strict quarantine during W2 than W1 in Poland, Turkey, Ukraine, and the total sample. However, in Germany, Russia, and Slovenia, the differences were trivial.
In all countries and the total international sample, the exposure to friends or relatives infected with the COVID-19 was higher during W2 than W1. A large significant effect was observed in Turkey, a medium effect in Ukraine and the total sample, while a small effect was observed in Germany, Poland, Russia, and Slovenia.
Similarly, the proportion of students who experienced a loss of friends or relatives due to the COVID-19 significantly increased during W2 compared to W1. The medium effect was observed in Turkey, while a small effect was prevalent in all other countries and the international sample.
The proportion of students who experienced losing a job due to the COVID-19 pandemic was lower during W2 than W1 in the international sample and in Ukraine, although the effect sizes were small. There was no significant drop in Germany, Poland, Russia, and Turkey.
Mixed results were observed regarding the self-reported deterioration of economic status due to the pandemic. In the total sample, the difference between W1 and W2 was trivial. However, an increase in the proportion of students declaring that their economic status worsened was observed in Poland. On the other hand, there was a significant drop in the proportion of students claiming worse economic status during W2 in Russia. All effects were small regarding this aspect of exposure. There were no significant differences in Germany, Slovenia, Turkey, and Ukraine. The results of the comparison are shown in Table 2.
---
Descriptive Statistics and Prevalence of Coronavirus-Related PTSD
Descriptive statistics showed that the mean value of coronavirus-related PTSD was 38.08 (SD = 15.49) among students from Germany, Poland, Russia, Slovenia, Turkey, and Ukraine during W2. A detailed description is presented in Table 3 (Note: M = mean; CI = confidence interval; LL = lower limit; UL = upper limit; SD = standard deviation).
The prevalence of coronavirus-related PTSD risk was presented at three cutoff points, according to the recommendations in the presented literature [40,41]. The proportion of students with coronavirus-related PTSD risk at three cutoff scores (25, 44, and 50) is presented in Table 4.
---
Logistic Regression for Coronavirus-Related PTSD Risk
Multivariate logistic regression for coronavirus-related PTSD risk during the second pandemic wave showed significant models for a moderate, high, and very high risk of PTSD among an international sample of university students from Germany, Poland, Russia, Slovenia, Turkey, and Ukraine. The predictors were eight aspects of self-reported exposure to COVID-19 controlling for gender and previous clinical diagnosis of depression, anxiety disorder, and PTSD. All predictors were included simultaneously using the enter method. Results are presented in Table 5.
The model of moderate risk of coronavirus-related PTSD (Cutoff Point 25) revealed only three predictors to be relevant among the eight items describing exposure to the coronavirus pandemic: experiencing COVID-19 symptoms (Item 1), COVID-19 infection among friends and family (Item 5), and the deterioration of economic status due to the pandemic (Item 8). Students who experienced COVID-19 symptoms and whose family or friends were infected had 1.5 times higher odds of moderate risk of PTSD. Those who reported worsening economic status due to the pandemic were almost two and a half times more frequently in the moderate PTSD risk group. In addition, female students were two times more likely to develop moderate PTSD. Coronavirus-related PTSD was three times more likely among students with a previous clinical diagnosis of PTSD.
The regression models for high and very high risk of PTSD revealed a different set of predictors. In those two models, the significant predictors were the same, with similar adjusted odds. Students who had a family member or friend die from coronavirus infection were twice as likely to be in a coronavirus-related PTSD-risk group. Additionally, students exposed to the COVID-19 pandemic in terms of losing a job (their own or a family member's) and worsening economic status were 1.6 times and 1.8 times more likely to be in a (very) high coronavirus-related PTSD-risk group, respectively. Worsening of economic status was thus a significant predictor of both high and very high risk of PTSD. Among demographic factors, female gender and previous diagnoses of depression and PTSD were associated with a twofold higher risk of coronavirus-related PTSD.
---
Discussion
In this study, we showed the significance of differences in aspects of exposure to the COVID-19 pandemic in university students from Germany, Poland, Russia, Slovenia, Turkey, and Ukraine between the first wave (W1) and the second wave (W2) of the COVID-19 pandemic with regard to the stringency index. We also showed the prevalence and predictors of coronavirus-related PTSD. To the authors' knowledge, this is the first study undertaking this theme among university students from six countries during W2.
Our study revealed the differences in exposure to COVID-19 among university students in Germany, Poland, Russia, Slovenia, Ukraine, and Turkey during W1 (May-June 2020) and W2 (October-December 2020). The prevalence of coronavirus-related PTSD risk for the 25, 44, and 50 cutoff scores was 78.20%, 32.70%, and 23.10%, respectively, during W2. We also estimated prediction models of coronavirus-related PTSD risk for each cutoff score in the international sample of university students during W2.
We expected that in countries such as Russia, where the restrictions were significantly waived during W2, the worsening of economic status and job loss due to the COVID-19 pandemic would significantly decrease. The mean stringency of restrictions in the six countries was lower during W2 compared to W1. Accordingly, the ratio of students in the international sample who had lost a job during W2 was significantly lower compared to W1. In contrast, the ratio of students whose economic status worsened due to the pandemic was not significantly different during W2. Therefore, the experience of job loss by a student or a family member was more evident during W1 (31%) than W2 (25%). However, the deterioration of economic status was still on the rise even during W2 (although the increase was not significant) and concerned over half of the international student sample (55%). The lowest proportion of students exposed to worsening economic status during W2 was noted in Germany (29.92%), while the highest proportions were noted in Poland, Ukraine, and Turkey, at 72.14%, 70.41%, and 63.78%, respectively. In contrast, the proportion of French students who reported a loss of income was significantly lower and reached only 18.30% in June-July 2020 [25]. In accordance with our expectations, the rate of students who experienced worsening economic status due to the pandemic was significantly lower in Russia during W2 due to the significant waiving of restrictions, whereas it was higher in Poland, where the restrictions were more stringent.
In congruence with Hypothesis 2, exposure to COVID-19 in the total sample of students rose. During W2, a higher proportion of students in all countries except Germany reported experiencing symptoms of COVID-19 compared to W1, even though the number of new daily cases was almost 20 times higher during W2 (n = 7762) than during W1 (n = 392) in the general German population. On the other hand, the difference in the frequency of testing for COVID-19 was the largest in the German sample. Therefore, although the ratio of German students who experienced having infected friends/family or losing a loved one was higher during W2, the portion of German students who experienced COVID-19 symptoms did not increase. This might be due to the significant increase in testing among German students.
There was significant growth in the percentage of students hospitalized or in strict quarantine in Poland and Turkey. Additionally, in Ukraine, the ratio of students in a compulsory 14-day quarantine was elevated during W2. In congruence with the numbers in the general population, the percentage of students who experienced losing a family member or friends due to COVID-19 was higher in all countries. However, the largest increase in daily coronavirus-related deaths was among the Polish and Russian general populations, whereas among the student population, the highest increase was declared in Turkey. Similar to previous research among Turkish students [43], it would seem that the student sample was overexposed to the bereavement experience. However, there were concerns regarding the reliability of COVID-19 data in Turkey, as it appeared that the prevalence of the disease (particularly total deaths) might be underreported [44,45].
The mean of coronavirus-related PTSD risk in the international sample of students from six countries in this study exceeded the lowest cutoff score (25), which is used for screening reasons [40]. The prevalence at this cutoff point was very high and indicated that 78.20% of students were at coronavirus-related PTSD risk in this study. Every third student (32.70%) was at high PTSD risk (Cutoff Point 44), and almost every fourth student (23.10%) was at very high PTSD risk (Cutoff Point 50). The high cutoffs are used to minimize false-positive diagnoses [41]. The prevalence of PTSD risk at the beginning of the first wave of the COVID-19 pandemic in young adults in the USA [46] and China [16], with the use of the PCL-C, was 32% (Cutoff Point 44) and 14% (Cutoff Point 38), respectively. Research with the use of the PCL-5 in the general population showed a total of 7% of people experiencing post-traumatic stress symptoms in a Chinese sample (January/February, cutoff score 33) [47] and 13% in five Western countries (Cutoff Point 32) [22]. However, an Italian general sample, using a modified 19-item PCL-5-based PTSD questionnaire, revealed a total of 29% of people experiencing PTSD symptomatology [48]. The highest prevalence (67% demonstrating a high PTSD level) was in a general Chinese population, with a different measurement (IES-R) [49]. Various measurements and cutoff scores hinder the comparison to our sample. Additionally, the presented studies were conducted during the first wave of the pandemic. However, referring to the specific cutoff score (44), the prevalence of coronavirus-related PTSD risk in the student sample in our study during the second wave of the pandemic (33%) was similar to that among young adults in the USA (32%) [46]. On the other hand, the PCL-C version used there was general and did not refer to COVID-19 as a specific stressful event [46], unlike in our study. In contrast, a single-arm meta-analysis [50] of 478 papers and 12 studies showed that the prevalence of PTSD in the general population during the COVID-19 pandemic was 15%; therefore, it was significantly lower than among students in this study.
Data regarding the prevalence of PTSD in the student population are inconsistent. In French university students one month after the COVID-19 lockdown, the prevalence of PTSD risk measured by the PCL-5 (Cutoff Score 32) was 19.50% [25]. Among Chinese college students surveyed in February 2020 with the abbreviated PCL, the prevalence was 31% [24]. The smallest prevalence, 2.7%, was noted in Chinese university students [31]; that study used the PCL-C with a cutoff score of 38. Repeated cross-sectional research among French students revealed that 16.40% of students developed probable PTSD at the second measurement. This increase at the second measurement [25] may help explain the high prevalence at the screening level (Cutoff Point 25) in our sample (78.20%).
The prediction models for coronavirus-related PTSD risk differed by risk severity with respect to exposure, i.e., experiencing symptoms of COVID-19, testing for COVID-19, and infection of friends or family members. In the prediction model of moderate PTSD risk (Cutoff Point 25), these were important factors, while in the more severe PTSD risk models (Cutoff Points 44 and 50) they were irrelevant. The significant predictors in the more severe PTSD risk models were experiencing symptoms of COVID-19, losing a family member or friend because of COVID-19, job loss (by the participant or a family member), and worsening economic status due to the COVID-19 pandemic. However, experiencing the loss of a friend or family member and job loss were not relevant predictors of moderate coronavirus-related PTSD risk. Testing and hospitalization for COVID-19, as well as being in strict 14-day quarantine, were not significantly associated with coronavirus-related PTSD risk in any model. These results are similar to research among Chinese students [31], where longer home quarantine was not associated with PTSD. However, in the French university sample, having lived through quarantine alone was a significant factor associated with probable PTSD [25]. The lack of association between quarantine experience and PTSD risk in this study may be due to the low proportion of exposed students (11%).
Prior medical diagnosis reported by students regarding depression was associated with high and very high coronavirus-related PTSD. Prior PTSD diagnosis was associated with a moderate and very high risk of coronavirus-related PTSD in the international sample.
These results are aligned with previous findings [30]. However, prior anxiety diagnosis did not turn out to be relevant for PTSD risk in this study.
Contrary to other research [23,24] showing that gender was not a significant moderator of PTSD among young adults during the COVID-19 pandemic, we found that female students were twice as likely to develop moderate, high, or very high coronavirus-related PTSD risk. A similar pattern of PTSD risk was recognized in previous research [26,27], including research regarding natural disasters [28]. This inconsistency might be due to the timing of the study: the previous research reports results from the first wave of the pandemic, whereas our results come from the second wave. Over this longer period, gender differences may have become more pronounced among students.
---
Limitations
There are some limitations to the present study. First, the study has a repeated cross-sectional rather than longitudinal character. Second, the study used self-report questionnaires; therefore, the results might be subject to retrospective response bias. Additionally, the sample was a convenience sample. Because recruitment was limited to specific regions in each country, the sample is not representative of the student population, which limits the generalizability of the results, particularly in the Turkish case, where the majority of students came from a highly volatile region of Eastern Turkey. Additionally, we used the PCL-S based on the DSM-IV instead of the PCL-5 based on the DSM-5. However, the PCL-S enables the measurement of PTSD with regard to a specific stressful experience: the COVID-19 pandemic. The majority of participants were female students (70%); however, this reflects the real gender balance in most of the surveyed countries, where the percentage of female students reaches 60% [51][52][53][54].
Considering the limitations and strengths of this study, future research should examine exposure and coronavirus-related PTSD from a cross-cultural perspective, using a longitudinal design and a representative sample. It should be noted that this study was conducted before the introduction of open public vaccination programs. We could expect that access to vaccination will mitigate the negative psychological impact of the COVID-19 pandemic. However, students have ambivalent attitudes towards vaccination programs, particularly non-medical students [55]. Therefore, this access might also be a source of psychological distress in the future.
---
Conclusions
This study shows that, besides exposure to COVID-19 symptoms, the loss of relatives because of COVID-19, female gender, and a prior diagnosis of a mental health disorder, the economic impact of the pandemic plays a vital role in susceptibility to high coronavirus-related PTSD risk. Even though the proportion of students who experienced worsening economic status did not increase during W2, it still affected over half of the student sample from the six countries in this study. Therefore, additional financial support for students could mitigate coronavirus-related PTSD risk, particularly in Poland, Ukraine, and Turkey.
The analysis of the stringency of federal restrictions revealed an increase in worsening economic status in Poland (where the restrictions were more stringent) and a decrease in Russia, where the restrictions were waived despite a high number of new daily cases. The German case shows the importance of frequent testing; however, this research was conducted before open public access to the COVID-19 vaccine.
---
Data Availability Statement:
The materials and methods are accessible at the Center for Open Science (OSF), titled: Mental Health of Undergraduates During the COVID-19 Pandemic [56]. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
---
Conflicts of Interest:
The authors declare no conflict of interest. | 41,981 | 2,133 |
c23301af67e11cb736f70b246507938b2ab63459 | Unveiling the Role of Zoos in Smart Cities: A Quantitative Analysis of the Degree of Smartness in Kyoto City Zoo | 2,023 | [
"JournalArticle"
] | The rapid pace of urbanization and the emergence of social challenges, including an aging population and increased labor costs resulting from the COVID-19 pandemic, have underscored the urgency to explore smart city solutions. Within these technologically advanced urban environments, zoos have assumed a pivotal role that extends beyond their recreational functions. They face labor cost challenges and ecological considerations while actively contributing to wildlife conservation, environmental education, and scientific research. Zoos foster a connection with nature, promote biodiversity awareness, and offer a valuable space for citizens, thereby directly supporting the pillars of sustainability, public engagement, and technological innovation in smart cities. This study employs a quantitative analysis to assess the alignment between smart projects and the distinctive characteristics of Kyoto Zoo. Through questionnaires, we collected feedback on performance and importance, and subsequently employed the analytic hierarchy process and the fuzzy integrated evaluation method to obtain quantitative results. The findings reveal the high level of intelligence exhibited by Kyoto Zoo, and the analysis provides insightful guidance that can be applied to other urban facilities. At the same time, we compared Kyoto Zoo with Ueno Zoo to see the difference in intellectualization achievements in different contexts in terms of data and systems. | Introduction 1.Background
The rapid expansion of urban areas, coupled with advances in technology and the need to improve citizens' living conditions and well-being, has placed greater emphasis on the role of landscapes in the city. In response to these challenges, the "Smart City" concept has emerged, drawing on the notion of a "Smart Earth" first introduced by IBM in a thematic report in 2008 [1]. A smart city is a modernized urban environment that leverages diverse electronic methods and sensors to collect specific data, intending to manage assets, resources, and services effectively and ultimately enhance overall city operations [2]. Integrating information and communication technology (ICT) and Internet of Things (IoT) technologies into smart cities has enabled greater information transparency and digitalization of city life, empowering citizens with the tools and data they need to make informed choices on a daily basis.
The concept of "Smart" (Japanese for 'sumāto-ka') has garnered significant attention in urban development and intellectualization, as reflected by its widespread adoption across various industries. Key terms such as ICT, IoT, artificial intelligence (AI), and 5G are now firmly entrenched in the public consciousness. The growing prevalence of "Smart Cities" necessitates using the IoT as an information network platform, enabling the efficient collection and processing of big data. The concept of smart cities goes beyond traditional urban development, aiming to optimize city operations and enhance the quality of life for citizens. By leveraging information and communication technology (ICT) and the Internet of Things (IoT), smart cities enable effective management of assets, resources, and services. Integrating ICT and IoT technologies empowers citizens with real-time data and tools, fostering information transparency and digitalization in various aspects of urban life. These advancements provide the foundation for a smarter and more efficient urban environment.
With urban development on the rise, there has been a surge in the construction of "Smart Parks", such as Haidian Park [3], Longhu G-PARK Science Park [4] in Beijing, China, Xiangmi Park [5] in Shenzhen, China, Arashiyama Park (Nakanoshima area) [6], and The Keihanna Commemorative Park [7] in Kyoto, Japan, and the Palace Site Historical Park [8] in Nara, Japan. These parks represent an innovative approach to providing citizens with better green spaces. Within this context, our study focuses on a specific type of park, namely zoos. Zoos play a vital role in developing smart cities by serving as integral components of urban landscapes. These institutions contribute to cities' overall well-being and sustainability by providing green spaces, wildlife conservation efforts, and opportunities for education and research. In the context of smart cities, zoos act as catalysts for sustainable development and success, aligning with the core principles and objectives of these technologically advanced urban environments. The more targeted visitor traffic and richer ecological environments in zoos make their intellectualization more impactful and meaningful.
Beyond their role as recreational spaces, zoos fulfill critical functions such as wildlife conservation, environmental education, and scientific research. These activities directly contribute to the sustainable development, public engagement, and technological innovation aspects of smart cities. Zoos not only provide citizens with opportunities to connect with nature but also serve as platforms for raising awareness about biodiversity and environmental sustainability.
The COVID-19 pandemic has significantly impacted zoos worldwide, with many facing operational and financial difficulties. The decline in visitor numbers, and hence in admission revenue, one of the main sources of income for zoos, has severely impacted their operations. In addition, the increased costs of maintaining the animals and providing them with food and other necessities have also contributed to the financial difficulties that zoos face. To cope with these challenges, zoos have implemented cost-cutting measures, reducing staff, animal collections, and conservation programs. In Japan, for example, in 2020, feed costs at Tobu Zoological Park [9] increased by ~5-6%. In a 2021 survey conducted by NHK, 97% of zoos in Japan said they had closed temporarily during the prior year [9]. Since then, admission revenues in Japanese tourism have decreased dramatically due to a sharp decline in inbound visitors from overseas [10]. This shows that supporting and sustaining zoos during crises is crucial, given their significant contributions to animal conservation, education, and research. It is imperative to find ways to overcome these challenges and ensure the long-term viability of zoos in the context of smart cities.
---
Cases and Situation
Innovative smartening projects have been implemented in Japan to mitigate the negative impact of the COVID-19 pandemic on zoos. For instance, KDDI, a Japanese company, launched the "one zoo" online platform, which featured prominent zoos such as the Asahiyama Zoo and the Tennoji Zoo [11]. The platform allowed users to observe animals in real time and make donations to animal protection associations through membership purchases. Additionally, the platform rewarded users with zoo tickets or souvenirs. However, despite the developers' efforts to enhance the zoo tour experience, the project was discontinued on 31 May 2022 [11] due to a lack of online activity. The developers had not considered user feedback on each smartening project promptly and lacked objective analysis, leading to the project's failure.
Another example is Tokyo Zoonet's online platform [12], Tokyo Zoovie, which comprises four members of the Tokyo Zoological Park Society (Tokyo Dobutsuen Kyokai): Ueno Zoological Gardens, Tama Zoological Park, Tokyo Sea Life Park, and Inokashira Park Zoo. The platform provides visitors with a guided tour of the four zoos using an animal map and 3D models, and VR tours are also available. In addition, Ueno Zoo is part of the Tokyo Metropolitan Park Association, and it offers smart functions in the Tokyo Parks Navi platform, such as the ability to collect stamps, look up tour routes, blogs, and automatic tour recommendations, making it very user-friendly.
The development of smart platforms for zoos, such as "one zoo" and Tokyo Zoonet, highlights the increasing utility of intellectualization in addressing the operational and financial difficulties these institutions face. However, it is crucial to objectively assess the practicality and effectiveness of these smart functions and determine whether there is actual demand from visitors for such features. To this end, this study aims to model and analyze these issues quantitatively, enabling zoo managers to make informed decisions regarding the zoo's development, identify potential cost savings, and gain insight into visitor needs and preferences compared to the wider market. By providing an objective and data-driven analysis of the efficacy of smart functions in zoos, this research will contribute to these vital institutions' sustainable development and success.
The current state of Japanese smart zoos is in a preliminary phase, necessitating a standardized and objective set of regulations to identify good and bad smart implementations. Nevertheless, at the current stage, most smart projects are focused on multimedia functions to enhance the visitor experience. There are relatively few projects centered on big data and ecological conservation. Thus, the judgment criterion will focus on visitor feedback rather than efficacy values. The data collection component of this study will take the form of a questionnaire, asking respondents to rate the importance and performance of each smart item on a scale from 1 to 5. AHP (analytical hierarchy process) weights will be calculated based on this questionnaire data, and FCEM (fuzzy comprehensive evaluation method) will be employed to obtain numerical results for the objective indicators of intellectualization. In addition, IPA (importance-performance analysis) will be utilized to evaluate each smart project, assess its current development status, and obtain opinions. Through this study, zoo managers can identify the appropriate direction for zoo development, achieve significant cost savings, determine visitor needs and preferences, and compare their zoo with the broader market.
As demonstrated in our previous research, we have already conducted a comprehensive examination of Ueno Zoo in Tokyo using the above-mentioned methodology [13]. Moreover, for the current investigation, our focus will shift to Kyoto Zoo in Kyoto City, a highly illustrative metropolis that has experienced a fiscal crisis in the past ten years [14], prompting an extensive effort to revitalize its economic landscape through a multifaceted smart city plan. Kyoto Zoo is an ideal site for our research because of this complex milieu. Furthermore, our investigation aims to explore the divergences between the intellectualization of zoos as a general practice and the unique challenges and opportunities that arise from zoo development within the comprehensive framework of a smart city.
---
Literature Review
The current discourse in Japan within the academic community has shifted toward embracing the notion of smart zoos. However, it is important to note that the term commonly used in Japan is "Intellectualization of zoos" (or Dōbutsuen no sumāto-ka), which is often regarded as an integral component of the broader smart parks concept.
"SMART PARK: A TOOLKIT", from the Luskin School of Public Affairs, UCLA [15], provides a comprehensive understanding of the concept of smart parks, laying out a framework for evaluating such parks based on their spatial characteristics from the perspective of designers, park managers, and advocates. While this model offers a satisfactory level of specificity in defining various program parameters, the missing objective data evaluation system remains a critical gap. Similarly, "Research on the Construction Framework of Smart Park: A Case Study of Intelligent Renovation of Beijing Haidian Park" offers a systematic approach to evaluating smart parks based on their functions [3]. However, the study does not include a comprehensive survey of tourists' emotions and objective data, limiting its applicability. To address this gap, the article "How smart is your tourist attraction? Measuring tourist preferences of smart tourism attractions via an FCEM-AHP and IPA approach" [16] adopts a pioneering approach to incorporate FCEM-AHP and IPA methods into analyzing the weighting of parks and tourism preferences. The study leverages a questionnaire to collect data and uses AHP to determine weight sets, while a fuzzy comprehensive evaluation approach is applied to derive the strengths and weaknesses of the park. Although the study model provides a comprehensive framework, it has several limitations, including the lack of clear project descriptions and illustrations in the questionnaire, resulting in limited understanding among interviewees. Additionally, many of the projects in the study require re-exploration due to changes over the past few years.
Research on smart parks has recently entered an initial stage, with the establishment of frameworks for evaluating spatial characteristics and functional aspects. However, a critical gap needs to be addressed regarding objective data evaluation systems and comprehensive surveys of tourist feedback data. Previous studies have made notable contributions by adopting innovative approaches like FCEM-AHP and IPA methods to analyze park weighting, tourism preferences, and strengths and weaknesses. These studies lay a solid foundation for further research on smart zoos, particularly focusing on the Kyoto Zoo within the context of Kyoto Smart City.
---
Research Purpose and Significance
In another prior investigation, "Impact of Intellectualization of a Zoo through a FCEM-AHP and IPA Approach", the study pursued a methodical evaluation of the intellectualization process of Ueno Zoo [13]. The outcome revealed that Ueno Zoo is still in the nascent stage of intellectualization, with several components requiring further development for visitors to have an immersive tourist experience. Therefore, there is a pressing need to enhance the intellectualization and user-friendliness of the Tokyo zoos to create a more comprehensive and satisfactory tourist experience. Previous studies have provided a solid foundation for further research on smart zoos, with particular attention to Kyoto Zoo, utilizing the FCEM and IPA methodologies. Moreover, employing the same analytical framework would facilitate the comparative analysis of the degree of smartness and development orientation between Kyoto Zoo and Ueno Zoo. By employing the FCEM and IPA methodologies, the present study aims to quantitatively evaluate the intellectualization of Kyoto Zoo and compare it with Ueno Zoo, utilizing a consistent analytical framework. The ultimate goal is to enhance the intellectualization and user-friendliness of zoos in Japan, providing a more comprehensive and satisfactory tourist experience.
---
Materials and Methods
---
Study Area
The selection of Kyoto Zoo as the study site was deliberate and based on several reasons. Firstly, it is the second-oldest zoo in Japan, after Ueno Zoo, and has a rich history and heritage. Secondly, Kyoto Zoo is a non-commercial entity that espouses humanistic values and promotes peace. In 1941, during the war, many animals at the zoo perished through a large-scale animal slaughter. Since 1942, the zoo has held memorial services almost every autumn to express gratitude and reinforce the importance of life [17]. As of June 2019, Kyoto Zoo is home to 570 animals of 123 species, comprising mammals, birds, reptiles, amphibians, and fish [18]. Hence, Kyoto Zoo is where visitors can appreciate animals, ponder their living conditions, experience life through animal interaction, and gain insights into human-nature relationships. It embodies a part of Kyoto's culture and revered traditions and underscores the importance of peaceful coexistence between animals and humans. Thirdly, although the zoo is not located in the city center, it is situated in the Okazaki Area of Kyoto, which is surrounded by popular tourist destinations such as the Kyoto City Kyocera Museum of Art, Okazaki Park, and the Heanjingu Shrine, thereby ensuring a steady flow of visitors and a conducive operating environment [19]. During the financial crisis faced by Kyoto City, Kyoto Zoo appealed for assistance via SNS platforms, seeking support from local shops and donations from the community to provide the animals a chance to survive [20]. Notably, a local pickle store donated radish roots and leaves not commonly consumed by humans to serve as animal food. This act served as an example of how intellectualization can contribute to regional collaboration and promote or influence certain sustainable development goals (SDGs).
---
Identifying Evaluation Items of the Smart Zoo System of Japan (SZSOJ)
The research process of this study is shown in Figure 1.
The present study seeks to explore the unique application of the concept of intellectualization in Japanese zoos, which is closely intertwined with the urban lifestyle of Japan. To achieve this aim, we draw upon ongoing projects at Kyoto Zoo, which has been observed to have a wide range of QR codes, making it a noteworthy feature for our primary classification. Furthermore, the zoo's official ecological sustainability plan identifies the ecosystem as another primary classification item. Our survey revealed that the mobile application in Kyoto Zoo has been discontinued. As a result, we have identified four primary classification items: QR code information function, ecology system, functions within the zoo, and official website function. The 26 secondary classification items are derived from these four primary categories. A summary of the concept definitions of these items is presented in Table 1.
---
QR code information function:
Animal education science videos QR code information: QR codes for science videos can provide visitors with educational and entertaining content about animal behavior, ecology, and conservation, enhancing their understanding of the natural world.
Regional activities QR code information: QR codes about regional activities can inform visitors about cultural and recreational activities in the zoo's surrounding area, encouraging them to explore the local community.
Animal education science live QR code information: QR codes about live animal education events can provide visitors access to real-time animal behavior and conservation education, promoting a deeper understanding and appreciation of the zoo's mission.
Animal protection organization QR code information: QR codes about animal protection organizations can inform visitors about partner organizations and their efforts to conserve and protect endangered species worldwide.

Ecology system:
Ecological cycle systems: Ecological cycle systems in the zoo can sustainably manage waste, recycle resources, and maintain a healthy environment for animals and plants.
Environmental sensors: Environmental sensors can monitor the zoo's temperature, humidity, air quality, and other environmental factors, providing data for environmental management and animal welfare.
Automatic watering: Automatic watering systems can provide plants with appropriate amounts of water, thereby reducing water waste and ensuring plant health in the zoo.
Eco-energy (solar power): Solar power can generate clean energy for the zoo, reducing its carbon footprint and promoting sustainable energy use.
Ecological energy use information: Information about ecological energy use in the zoo can educate visitors about the zoo's efforts to reduce energy consumption, promote renewable energy, and protect the environment.

Functions within the zoo:
Free WIFI: Free WIFI in the zoo can provide visitors access to online resources and enhance their overall experience.
Electronic ticketing system: Electronic ticketing systems can streamline ticket purchasing and reduce wait times for visitors, improving their overall experience in the zoo.
Interactive animal education: Interactive animal education can provide visitors with engaging and educational experiences, such as allowing them to interact with animals through devices or providing real-time feedback on animal behavior and health, promoting a deeper understanding and appreciation of the natural world.
Animal state observation: Animal state observation can monitor animal behavior and health, enabling the zoo to provide appropriate care and promote animal welfare.
Animal status detection (camera): Animal status detection cameras can detect and monitor animal behavior and health, providing data for animal welfare management and research.
Electronic information screen: Electronic information screens can provide visitors with maps, schedules, and other relevant information about the zoo, enhancing their overall experience.
Smart souvenir vending (photos): Smart souvenir vending machines can provide visitors with customized photo souvenirs, enhancing their zoo memories and promoting sustainable souvenir production.

Official website function:
Official website function: The zoo's official website provides visitors with comprehensive information about the zoo's animals, exhibits, events, and services.
Tourism SNS: The zoo uses social media platforms such as Facebook, Instagram, and Twitter to promote tourism activities and interact with visitors.
Digital map: The digital map of the zoo is accessible on mobile devices. It provides visitors real-time information about exhibits, events, and animal locations, facilitating navigation and enhancing the visitor experience.
---
Data Collection
This study gathered data from 117 highly qualified graduate students in landscape architecture enrolled at prestigious universities in Kyoto and Chiba. To ensure the veracity and credibility of the collected data, respondents were required to log in to their personal accounts before answering the Google questionnaire. Additionally, participants confirmed that they had experienced the Kyoto Zoo as a tourist, thus providing reliable insights into the smart zoo experience. Due to their academic backgrounds, the respondents could evaluate the smart zoo experience from a research-based perspective, while the completed tourist experience guaranteed the validity of the questionnaire. The questionnaire was designed with two levels of indicators (Level 1 and 2), and it included items that were assessed for their importance and performance on a scale of 1-5. The importance assessment scale ranged from 1 (not at all important) to 5 (very important), whereas the performance assessment scale ranged from 1 (very poor) to 5 (very good). The inclusion of graphical descriptions in each item aimed to prevent misidentification. The reliability of the questionnaire was also tested to ensure its quality. For the full list of questionnaire items, please refer to Supplementary File S1.
In order to derive meaningful insights from the collected data, the study utilized a two-stage process. The importance rankings obtained from the survey results were utilized as objective data references in the first stage. To determine the weightage of each item, the study applied the AHP. The AHP-derived weights were then used in the FCEM. This method integrates the fuzzy theory, a widely recognized method for decision making in complex situations, and the analytic hierarchy process to evaluate complex systems. The FCEM was utilized to obtain the zoo's current results for construction effectiveness.
In the second stage, the original 1-5 rating data obtained from the questionnaire were retained. The study employed IPA testing to assess the overall intellectualization construction degree and each specific item in the zoo. IPA is a widely used method for evaluating the performance of a system or product by examining the relationship between importance and performance. The results obtained from the IPA testing were then used to guide future zoo development, providing valuable insights that could be used to enhance the visitor experience and improve the zoo's overall effectiveness.
---
AHP (Analytic Hierarchy Process)
The analytic hierarchy process (AHP) is an essential tool for this study due to its rigorous and systematic approach to decision making. Developed by Thomas L. Saaty in the mid-1970s [21], the AHP combines qualitative and quantitative analyses to quantify group decisions and priorities. By breaking down complex problems into hierarchical structures and using pair-wise comparisons, the AHP determines the relative importance and weight of criteria and alternatives [22]. This allows decision makers to make well-informed and transparent choices based on thorough analysis. Therefore, in our study, we adopted the AHP as a recognized method for systematically and hierarchically quantifying group decisions and weights. We used pair-wise comparisons of the weights of each item to assess the relative importance of the different criteria within each item. To ensure the accuracy of the pair-wise comparison process, importance rankings were collected from the questionnaire, and the resulting data were transformed into percentages on a scale of 1-9. These percentages were then used to judge the relative importance of pair-wise comparisons among all items. The rankings of relative importance, as shown in Table 2, were obtained from this process. The vector U also defines each evaluated item set; the classification is defined as follows:

U = {U_1, U_2, U_3, U_4}, where each first-level item U_m comprises its second-level items, U_m = {U_m1, U_m2, ..., U_mn}.

The AHP method analyzes the designated items based on their importance ranking and then constructs a judgment matrix. The maximum eigenvalue of the judgment matrix is calculated, and the corresponding eigenvector is taken as the evaluation weight vector A. A consistency test is then performed to ensure the objectivity and rationality of the judgment, because the AHP method is prone to inconsistencies in the judgment matrix when respondents are asked to compare the importance of multiple criteria. Therefore, a consistency ratio (CR) is calculated to determine the degree of inconsistency in the judgment matrix.
In this computation, the deviation consistency index of the judgment matrix, CI, is calculated as CI = (λ_max − n)/(n − 1), where λ_max is the maximum eigenvalue and n is the order of the matrix. A higher CI indicates poorer consistency of the judgment matrix, whereas CI = 0 indicates a completely consistent matrix. The consistency ratio is then calculated as CR = CI/RI, where RI is the average random consistency index. When CR < 0.1, the consistency of the judgment matrix can be considered acceptable.
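As a minimal, hedged illustration of this step (not the authors' actual implementation), the following Python sketch derives a weight vector from a reciprocal pair-wise comparison matrix via its principal eigenvector and checks the consistency ratio; the 4 × 4 judgment values are hypothetical placeholders for the four first-level items.

```python
import numpy as np

# Average random consistency index RI for matrix orders 1..9 (Saaty)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(judgment: np.ndarray):
    """Return (weights, CR) for a reciprocal pair-wise comparison matrix."""
    n = judgment.shape[0]
    eigvals, eigvecs = np.linalg.eig(judgment)
    k = np.argmax(eigvals.real)                 # principal (maximum) eigenvalue
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                             # normalized weight vector A
    ci = (lam_max - n) / (n - 1)                # deviation consistency index CI
    cr = ci / RI[n] if RI[n] > 0 else 0.0       # consistency ratio CR
    return w, cr

# Hypothetical judgment matrix for the four first-level items
# (QR code info, Ecology system, Functions within the zoo, Official website)
J = np.array([
    [1,   3,   5,   2  ],
    [1/3, 1,   3,   1/2],
    [1/5, 1/3, 1,   1/4],
    [1/2, 2,   4,   1  ],
])

weights, cr = ahp_weights(J)
print("weights:", np.round(weights, 3))
print("CR:", round(cr, 3), "-> acceptable" if cr < 0.1 else "-> revise judgments")
```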
---
FCEM (Fuzzy Comprehensive Evaluation Method)
The fuzzy comprehensive evaluation method (FCEM) is needed for this study due to its ability to handle uncertainty and imprecise information. Based on the fuzzy set theory pioneered by Lotfi Zadeh [23], the FCEM allows for representing and manipulating fuzzy and uncertain data. With its application in various fields, FCEM enables the conversion of qualitative and uncertain assessments into quantitative measurements [24]. In this study, where perceptions of the concepts of "Smart" for visitors are inherently vague, FCEM is employed to analyze and evaluate the effectiveness of smart construction in the zoo. By utilizing FCEM, the study aims to provide a comprehensive assessment considering multiple factors and constraints.
The FCEM calculation process is carried out in two steps using MATLAB. The first step involves establishing the fuzzy judgment matrix. The degree of membership of the Item Set Rm can be defined as follows:
R_m =
[ R_m1a  R_m1b  ...  R_m1e
  R_m2a  R_m2b  ...  R_m2e
  ...
  R_mna  R_mnb  ...  R_mne ]
The weighting of Item Set A of the first classification calculated by AHP can be defined as:
A = (A_1, A_2, A_3, A_4)
The weighting of Item Set Wm of the secondary classification calculated by AHP can be defined as:
W_m = (W_m1, W_m2, ..., W_mn)
As mentioned above, the symbol "m" signifies the primary classification category, while "n" denotes the number of sub-classification items. Moreover, the symbols "a-e" correspond to the five-point rating system, ranging from 1 to 5. By using this method, the degree of membership of the Item Set Rm can be established. The collected raw data from the questionnaire are then transformed into the "R" matrix, which is utilized to construct the fuzzy judgment matrix.
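Purely as an illustrative sketch (not the authors' MATLAB routine), the following Python snippet shows one way the raw 1-5 questionnaire ratings could be converted into a membership matrix R_m, with each entry taken as the share of respondents giving a sub-item a particular score. The example ratings, and the assumption that the first column corresponds to the highest score, are hypothetical.

```python
import numpy as np

def membership_matrix(ratings: np.ndarray) -> np.ndarray:
    """ratings: (n_respondents, n_subitems) integer scores from 1 to 5.
    Returns an (n_subitems, 5) matrix whose entry [i, j] is the share of
    respondents assigning sub-item i the j-th rating statement; we assume
    column 0 maps to the highest score (5) and column 4 to the lowest (1),
    so each row sums to 1."""
    n_items = ratings.shape[1]
    R = np.zeros((n_items, 5))
    for i in range(n_items):
        for j, score in enumerate(range(5, 0, -1)):
            R[i, j] = np.mean(ratings[:, i] == score)
    return R

# Hypothetical ratings from six respondents on three sub-items
ratings = np.array([[5, 4, 3],
                    [4, 4, 2],
                    [5, 3, 3],
                    [3, 5, 4],
                    [4, 4, 5],
                    [5, 2, 3]])
print(membership_matrix(ratings))
```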
The second step is to use the established matrix for the fuzzy comprehensive evaluation calculation as follows:
C_1 = W_1 × R_1
C_2 = W_2 × R_2
C_3 = W_3 × R_3
C_4 = W_4 × R_4

B = A × [C_1; C_2; C_3; C_4] = (b_1, b_2, b_3, b_4, b_5)
The term "bi" value refers to the degree of membership of the evaluated item to each evaluation criterion, which is determined based on the evaluation statement (e.g., "excellent", "good", "moderate", "fair", and "poor") corresponding to the ranking system. The "bi" value is obtained by performing the fuzzy calculation based on the degree of membership between the evaluation statement and the evaluated item. The highest value obtained from this calculation represents the intellectualization result of Kyoto Zoo, indicating the zoo's level of intelligence and smartness in terms of its facilities, exhibits, and services.
---
IPA
The importance-performance analysis (IPA) is a widely used method for evaluating customer satisfaction by measuring the gaps between customer expectations and actual perceptions [25]. Utilizing a four-quadrant diagram, this method can swiftly identify the areas requiring attention, prioritize each demand indicator, and formulate a sound implementation plan. The IPA method has proven to be an effective and straightforward approach for measuring customer satisfaction and improving the quality of service [26]. Its ease of use and practicality make it a valuable tool for businesses seeking to enhance customer satisfaction and stay ahead of the competition.
The mean value was computed for each item in the original questionnaire to perform IPA, and the resulting means for overall performance and importance were utilized as quadrant dividers. Figure 2 illustrates the chart that determines the position and stage of each item.
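As a minimal illustration of the quadrant logic described above (not the authors' SPSS workflow), the Python sketch below classifies a few items using the overall importance and performance means as dividers; the item names come from the paper, but the scores are invented for demonstration.

```python
import numpy as np

# Hypothetical (importance, performance) means on the 1-5 scale
items = {
    "Free WIFI":                        (4.1, 3.8),
    "Animal status detection (camera)": (4.3, 4.0),
    "Artwork QR code information":      (3.2, 2.9),
    "Ecological cycle systems":         (3.4, 3.1),
}

imp_mean  = np.mean([imp  for imp, _  in items.values()])   # importance divider
perf_mean = np.mean([perf for _, perf in items.values()])    # performance divider

def quadrant(imp: float, perf: float) -> str:
    # In this paper, Quadrant 1 is above average on both axes ("keep up"),
    # and Quadrant 4 is below average on both ("lower priority").
    if imp >= imp_mean and perf >= perf_mean:
        return "Quadrant 1: keep up the good work"
    if imp >= imp_mean:
        return "high importance, low performance: concentrate here"
    if perf >= perf_mean:
        return "low importance, high performance: possible overkill"
    return "Quadrant 4: lower priority"

for name, (imp, perf) in items.items():
    print(f"{name:35s} -> {quadrant(imp, perf)}")
```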
---
Results
---
Results of the AHP
The questionnaires demonstrated excellent recovery rates, and their reliability was assessed with values above 0.9. Additionally, validity was tested using the Kaiser-Meyer-Olkin measure of sampling adequacy, with values greater than 0.5 and significance values less than 0.05. The detailed results are presented in Tables 3 and 4. The relative importance of the questionnaire items and the corresponding factors is presented in Table 5, with the AHP scores ranging from 1 to 9, reflecting the pair-wise comparisons. The AHP scores were derived from the participants' relative judgment percentages and indicate the priority and significance of each item in the evaluation process. The AHP method involves a systematic, pair-wise comparison of all items based on their relative importance, leading to a judgment matrix for each evaluation factor. As presented in Table 6, the judgment matrix for the first-level evaluation factors of the SZSOJ was established using the AHP method. Moreover, Tables 7-10 display the judgment matrices for the second-level evaluation factors. The consistency of all matrices was evaluated; the consistency tests showed that the weight set obtained through the AHP is valid and reasonable.
The AHP analysis yielded varying weight values for each item, highlighting differences in their relative importance. For instance, U_3 (Functions within the zoo) in the first-level catalog had a weight value of 3.819%. In comparison, U_11 (Plants' QR code information) and U_18 (Animal education science videos QR code information) had a 6.917% weighting in the second-level catalog. In contrast, U_14 (Questionnaire research QR code information) had a weight value of only 1.162%. Similarly, U_19 (Regional activities' QR code information) had a 1.814% weighting, and U_110 (Animal education science live QR code information) and U_111 (Animal protection organization QR code information) had a combined weight of 3.349%. The weight of U_22 (Environmental sensors) was 3.355%, and that of U_25 (Ecological energy use information) was 7.183%. On the other hand, U_31 (Free WIFI) and U_34 (Animal state observation) had weights of 5.38%, while U_37 (Smart souvenir vending (photos)) had a weight of 2.15%, and U_42 (Tourism SNS) had a weight of 5.49%. Interestingly, these weights were lower than expected, suggesting that visitors or citizens may not necessarily share the same expectations as researchers or designers regarding the envisioned smart features.
---
Results of FCEM
The exact values for each second-level evaluation factor of the questionnaire can be found in Tables 11-14. Based on the membership degrees of the item sets, the matrices R_m can be constructed as follows:
R_1 and R_2 are constructed analogously from the corresponding questionnaire ratings, and R_3 and R_4 are:

R_3 =
[ 0.32 0.29 0.15 0.14 0.09
  0.32 0.30 0.20 0.13 0.05
  0.28 0.21 0.27 0.13 0.11
  0.32 0.26 0.21 0.12 0.09
  0.29 0.25 0.20 0.13 0.14
  0.34 0.22 0.25 0.12 0.07
  0.20 0.33 0.23 0.10 0.14 ]

R_4 =
[ 0.31 0.33 0.14 0.11 0.11
  0.24 0.28 0.21 0.17 0.10
  0.32 0.22 0.21 0.15 0.10 ]

Afterwards, the first-level fuzzy comprehensive evaluation result can be obtained by using the assessment matrix C and the corresponding weight vector A, as B = A × C.
C_1 = W_1 × R_1
C_2 = W_2 × R_2
C_3 = W_3 × R_3
C_4 = W_4 × R_4

B = A × [C_1; C_2; C_3; C_4] = (0.2694, 0.2682, 0.2034, 0.1317, 0.1298)
The fuzzy comprehensive evaluation approach is commonly based on the maximum-membership degree principle to determine the results. Upon analysis of vector B, it is apparent that the membership-degree values corresponding to the ranking system's categories of "excellent", "good", "moderate", "fair", and "poor" are 0.2694, 0.2682, 0.2034, 0.1317, and 0.1298, respectively. Notably, the highest membership degree value of 0.2694 is attributed to the "excellent" category. Therefore, the SZSOJ evaluation score for Kyoto Zoo is calculated to be 0.2694, which reflects an "excellent" rating. This finding indicates that the intellectualization construction efforts of Kyoto Zoo are commendable, resulting in high levels of visitor satisfaction and agreement with the zoo's intellectualization initiatives.
---
Results of IPA
The arithmetic mean of all factor scores was calculated using SPSS 21.0 software on the unprocessed data collected from the questionnaire, as tabulated in Tables 15 and 16. The generated IPA matrices are graphically depicted in Figures 2 and 3, which enable us to visually identify the key areas of concern and prioritize the corresponding demands. The IPA results present a stark contrast to the findings from the questionnaire, as illustrated in Figure 3. Notably, Functions within the zoo (categorized under the first quadrant) exhibited a significantly higher score than the mean values in both importance and expressiveness, thus emphasizing the need for its continuous sustenance. Similarly, the Official website function (also belonging to the first quadrant) scored higher than the mean values in both importance and expressiveness, marking its significance. However, the QR code information function and Ecology system, both falling under the fourth quadrant, received below-average scores on both parameters, indicating their lower priority in the development program. Nevertheless, with sustained investment, these functions could be improved, and their recognition and value to visitors enhanced.
In summary, Functions within the zoo is the preeminent and efficacious aspect. In contrast, the QR code information function and Ecology system require additional investment to increase visitors' acknowledgment of their worth. The findings of this study underscore the need for continual refinement and enhancement of the smart features of the SZSOJ to sustain and elevate visitor satisfaction and engagement. As such, the integration of user-centered design principles and feedback mechanisms should be prioritized in developing and implementing smart features in zoo environments. By doing so, the SZSOJ can reinforce its position as a cutting-edge smart zoo and provide visitors with an exceptional and memorable experience. Figure 4 provides clear evidence that the Animal status detection (camera) function is highly valued by visitors and, therefore, should be prioritized for continued development and maintenance. However, the Electronic information screen, Ecological cycle systems, Animal education science live QR code information, Animal protection organization QR code information, and Artwork QR code information are less highly valued by visitors. They should therefore be given lower priority in future development efforts. Conversely, visitors have expressed an interest in Plant QR code information, indicating its potential as a feature that could be further developed. Overall, the majority of the features fall in or around the center of the graph, with some outliers in the fourth quadrant, suggesting the need for consistent development and maintenance efforts.
---
Results on Satisfaction of Zoo Visitors
We used the study to derive a single measure of zoo visitor satisfaction, reflecting their perceptions within the current context. Each score was multiplied by the corresponding item's satisfaction proportion, and the results were averaged using the weights of each first-level category, yielding a final satisfaction rating out of 5. The synthesis of the different factors yielded a weighted mean satisfaction score for Kyoto Zoo of 3.43 (compared to Ueno Zoo's 2.70), affirming visitors' positive sentiments. Generally, a score greater than 3 indicates good satisfaction. This consolidated metric, aligned with scholarly practices, encapsulates smart features, sustainability, and visitor-centric amenities, reflecting the holistic zoo experience. This approach underscores methodological rigor, resonating with academic discourse, and deepens our understanding of the impact of smart zoos on visitor satisfaction dynamics.
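A brief, hedged sketch of the weighted-mean calculation just described; the category weights and mean scores below are invented for illustration (the actual values come from the study's weight tables and questionnaire data).

```python
# Hypothetical first-level weights (sum to 1) and mean satisfaction scores (1-5)
weights     = {"QR code info": 0.35, "Ecology system": 0.25,
               "Functions within the zoo": 0.20, "Official website": 0.20}
mean_scores = {"QR code info": 3.3, "Ecology system": 3.2,
               "Functions within the zoo": 3.7, "Official website": 3.6}

# Weighted mean over the first-level categories gives a single 1-5 satisfaction value
overall = sum(weights[k] * mean_scores[k] for k in weights)
print(f"overall satisfaction: {overall:.2f} / 5")   # a value above 3 is read as good satisfaction
```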
---
Discussion
---
Findings from the Questionnaire
The findings from the questionnaire survey conducted at Kyoto Zoo have yielded insightful results, with most items scoring similarly and possessing little disparity in terms of importance and expressiveness. However, some unexpected revelations emerged, such as the Functions within the zoo being ranked the least important among the four items in the first level of classification, exhibiting a significant value gap. In contrast, the QR code information function was surprisingly rated as the most important. Moreover, the questionnaire collection process and results differed from those of Ueno Zoo, and the following specific observations were identified:
1.
Firstly, there was a marked difference between Kyoto Zoo and Ueno Zoo in terms of questionnaire awareness. The feedback on the Ueno Zoo questionnaire revealed that many respondents were unaware of the existence of some smart functions in the park unless accompanying photos were provided. In contrast, the questionnaires for Kyoto Zoo could be completed with sufficient clarity, indicating a more thorough understanding of these smart functions among citizens. This may be attributed to the fact that the strong promotion of smart features in Kyoto City's smart city project has fostered widespread acceptance and comprehension of smart functions among the populace [27], unlike at Ueno Zoo, where the importance and performance of many projects exhibit significant disparities.
---
2.
Secondly, the present study examined and compared the feedback received from visitors at Kyoto and Ueno Zoos regarding the importance and performance of various smart functions. Interestingly, the results showed that there was a significant difference between the two zoos in the importance of Functions within the zoo. While this function was ranked the least important among the four items in the first level of classification in Kyoto Zoo, it was surprisingly ranked the most important function by respondents in the Ueno Zoo questionnaire. This may be due to the differing scale and positioning of the two zoos. Ueno Zoo, being a zoo with a large flow of people in the city center and many foreign visitors, may have visitors who pay more attention to offline interactive functions without the use of devices. In contrast, Kyoto Zoo, being a regional city zoo welcoming mostly resident visitors, may have visitors who expect newer and more innovative intelligent functions. Additionally, the respondents at Kyoto Zoo may have perceived Functions within the zoo as a basic feature that does not require much attention or specialness, as its project performance is similar to that of the city streets outside the park (e.g., the free Wi-Fi function at Kyoto Zoo uses the city Wi-Fi of Kyoto City). However, visitors to both zoos were found to value Official website functions highly, with visitors showing a strong demand for information about official releases. Moreover, the regional service nature of Kyoto Zoo may have contributed to the need for regional communication functions such as the QR code information function. These findings shed light on the different factors that may influence visitor perceptions and expectations of smart functions in zoos and highlight the need for zoos to carefully consider their unique visitor profiles when designing and implementing smart features.
---
3.
Finally, we propose that the promotion of smart city projects in Kyoto City and the financial crisis of the past few years have raised awareness and expectations of smart cities, which may lead to higher average feedback scores on the importance scale in the future.
---
Findings from Analytical Calculations
The results of the FCEM analysis demonstrate that the intellectualization infrastructure of Kyoto Zoo is deemed "excellent" (with an FCEM evaluation score of 0.2694). This finding suggests that citizens can easily comprehend and appreciate the intellectualization features of the zoo. Although unexpected, this is a very positive outcome, as it indicates that Kyoto Zoo can effectively realize the intellectualization process within the Smart City framework, making it more accessible and integrated into citizens' daily lives. Furthermore, in contrast to the FCEM result of Ueno Zoo, which received a "fair" score, the importance of the smart city background and system is more prominently manifested in the smart zoo concept [13]. This is due to the smaller scale of Kyoto Zoo and its amiable service style. Therefore, the public may prioritize practical features that have frequent daily uses over those that appear technologically advanced, akin to the higher happiness satisfaction reported in small towns compared to big cities.
The IPA analysis yielded results that differ significantly from the numerical importance ratings obtained from the questionnaire in the first classification level. We posit the following explanations:
1.
The distinction arises from the questionnaire design, where importance is assessed solely at the first level of categorization. The respondents' direct voting on these first-level categories determines their importance, hinging on their judgment of the overarching functional categorization. In contrast, IPA generates an average value by incorporating all respondents' responses to second-level categorization items in the calculation. This approach is more specific and depends on the performance of each functional category's sub-items. The questionnaire's importance value stems directly from tallying first-level categorical items, while IPA calculates the mean of the second-level categorical items.
---
2.
Overall satisfaction (derived from direct scoring of first-level categorical items in the questionnaire) may vary based on visitors' perceptions. For instance, the QR code information function, primarily focused on digital interaction, might prompt visitors to anticipate a comprehensive zoo intelligence. Conversely, "Functions within the zoo" is a broader category found in various Japanese zoos, making it challenging to associate directly with overall intelligence satisfaction. IPA's mean value for second-level category items differs in this aspect. Some first-level category items may exhibit relatively lower overall satisfaction scores but have sub-categories (e.g., "Animal Status Detection (camera)" within "Functions within the zoo") that garner high satisfaction. Consequently, these items receive higher values in IPA's mean value calculation.
---
3.
The quantity of sub-items varies across each Level 1 categorical item. For instance, the first category, "QR code information function", encompasses 11 sub-items, whereas the fourth, "Official website function", includes only 3. This disparity in sub-item count could influence visitors' perceptions and expectations. The QR code information function, featuring numerous sub-items, might overwhelm visitors with its multitude of functions, possibly eliciting feelings of fatigue or numbness. Indeed, our subjective interviews revealed inquiries like, "Why doesn't the zoo consolidate all these functions into one platform?"
The QR code information function and the Ecology system require further development and refinement to increase public and visitor awareness of their significance in driving the park's sustainable growth.
The current strengths, weaknesses, opportunities, and threats of Kyoto Zoo are summarized in the SWOT chart in Figure 5.
---
Comparison and Recommendations for Ueno Zoo Based on the Impact of Kyoto Smart City
Regarding system classification, both Kyoto Zoo and Ueno Zoo are classified as shown in Figure 6.
First, we will examine both zoos in a combined weighted order. We ranked the weights of the smart items of Kyoto Zoo (including the first-level classification and the second-level classification) obtained from Tables 6-10 and compared them with the items from Ueno Zoo. The results of the weights of the first-level classified items and the second-level classified items for each ranking are shown in Figures 7 and 8.
Figures 7 and 8 show that the item weights in Kyoto Zoo exhibit a higher degree of differentiation than those in Ueno Zoo. The range between the maximum and minimum values in Kyoto Zoo is more pronounced. Figure 8 highlights that approximately 20 sub-items in Kyoto Zoo have weights below 20%, with 6 sub-items falling below 5%, and there is a substantial disparity in the weights of the top four sub-items. In contrast, Ueno Zoo displays a relatively uniform distribution of item weights at the second classification level, resulting in a more balanced overall distribution. Interestingly, even the weights of the first three items in Ueno Zoo are identical.
The magnitude of the weighting also mirrors the visitors' level of expectation. In both Kyoto Zoo and Ueno Zoo, some programs with a high weighting (in other words, programs that visitors strongly anticipate) did not perform well and therefore did not end up in Quadrant 1, or even in Quadrant 4, of the IPA results, which indicates that the programs developed by the zoos do not always correspond to the actual needs of visitors.
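For readers unfamiliar with how items are placed in IPA quadrants, the sketch below (Python, hypothetical importance and performance values) shows the usual rule: each item is compared against the grand means of importance and performance. Quadrant numbering conventions vary across IPA studies; the labels used here are one common assignment and are our assumption, not taken from the paper.

```python
# Assign IPA quadrants by comparing each item to the mean importance
# and mean performance (hypothetical values, not the study's data).
items = {
    "AI camera":        (4.5, 4.1),
    "Official website": (3.2, 3.8),
    "QR audio guide":   (4.4, 2.6),
    "Event calendar":   (2.9, 2.5),
}

imp_mean = sum(v[0] for v in items.values()) / len(items)
perf_mean = sum(v[1] for v in items.values()) / len(items)

def quadrant(importance, performance):
    if importance >= imp_mean and performance >= perf_mean:
        return 1  # keep up the good work
    if importance < imp_mean and performance >= perf_mean:
        return 2  # possible overkill
    if importance < imp_mean and performance < perf_mean:
        return 3  # low priority
    return 4      # important but underperforming: concentrate here

for name, (imp, perf) in items.items():
    print(name, quadrant(imp, perf))
```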
Secondly, we need to compare the two zoos' respective performances at the current stage. Concerning the overall FCEM results (Kyoto: excellent, Ueno: fair), Kyoto Zoo aligns better with visitors' perceived needs for smart features. Additionally, in terms of the single satisfaction value (Kyoto: 3.43, Ueno: 2.70), Kyoto Zoo outperforms Ueno Zoo. In other words, based on the current state of development, Kyoto Zoo's smart projects are better suited to the needs of local tourists and the collaborative development required for a zoo. Despite Ueno Zoo having more construction funds and a larger scale, visitor feedback on its current performance prompts considerations about whether more advanced intellectualization is always better, or whether finding smart projects suitable for the public represents a more favorable development concept.
Thirdly, we need to compare them regarding the overall project categorization framework. As no unified smart management platform exists, Kyoto Zoo cannot be classified using the same criteria as Ueno Zoo at the first level. However, it can be classified based on the direction of functional development. Currently, Kyoto Zoo has fewer first-level classifications due to the lack of a smartphone application, although it has a strong QR code information function, classified as a first-level item. The Ecology system is also a primary development direction at Kyoto Zoo and a first-level item. On the other hand, the Official website functions and the Functions within the zoo have fewer secondary classification sub-projects. Although Kyoto Zoo scored well in the overall IPA assessment, considering its limited classification coverage, we recommend that Kyoto Zoo increase its classification coverage in future planning to support better development.
Although Kyoto Zoo's current intellectualization only focuses on its ecology system, the fact that it is already part of the smart city development plan and has proposed regional smart equipment is an encouraging sign. It is also promising that the city's pre-existing smart facilities, such as the smart traffic system, can be integrated with the zoo's intellectualization. With the ongoing development of the city's smart infrastructure, including the use of big data, human flow monitoring data, smart streetlights, and AI cameras, Kyoto Zoo has the potential to significantly enhance its smart capabilities. We strongly recommend that Kyoto Zoo take these opportunities into consideration when developing its future smart plans and categories. Doing so will allow the zoo to fully leverage its position within the smart city and take its intellectualization process to the next level.
---
Conclusions
The primary objective of this study is to ascertain the level of intellectualization in Japanese zoos by utilizing the FCEM analysis method while determining weights using the AHP. Additionally, this study aims to identify the current strengths and weaknesses of smart function developments in zoos through IPA and explore the prospects of such developments. At the same time, we compared Kyoto Zoo with Ueno Zoo to see the difference in intellectualization achievements in different contexts in terms of data and systems. Furthermore, this study aims to investigate the differences between Kyoto Zoo under the smart city system and a conventional smart zoo. As the concept of smart zoos is relatively novel, particularly in Japan, where smart cities are still in their developmental stages, we seek to refine objective system research methods to assess the intellectualization process more objectively, ultimately aiding zoos in Japan and around the world to become smarter. Our study results can be compared with current policies and be used to guide future developments in the field.
However, it is important to note that there are some limitations that can inform future research. Firstly, the selection of smart projects was influenced by characteristics unique to Kyoto Zoo, such as its different service orientation and smart project offerings, which made it difficult to compare with Ueno Zoo using identical criteria. Instead, we had to rely on feedback from service recipients to analyze the questionnaire responses. We plan to conduct the comparative study again once a unified standard for smart zoos is established in Japan. Secondly, due to geographical constraints, Kyoto Zoo's lack of a cell phone application and a smart platform for unified management may have limited public perception of its smart functions. These limitations highlight the need for more comprehensive and standardized evaluations of smart zoos in the future.
In addition, future studies can explore more advanced and innovative smart functions in zoos, including advanced technologies like AI, the IoT, and big data analysis [28]. Moreover, as the concept of a smart city continues to evolve, it will be important to compare the development of smart zoos with other traditional parks in the city to better understand the impact of smart technology on the overall tourism industry. This can be achieved through AHP for decision making and can expand the scope of smart research beyond individual zoo analysis.
---
Data Availability Statement: Not applicable.
---
Conflicts of Interest:
The authors declare no conflict of interest.
---
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/land12091747/s1, Supplementary File S1: The following is the supplementary data related to this article. | 56,714 | 1,449 |
5ef93f866b103994c7856be6301b9cc664ea570b | Conceptual Approaches of Prospective Pedagogy | 2,021 | [
"JournalArticle"
] | The theoretical and methodological foundation of Prospective Pedagogy has become one of the key issues of contemporary pedagogy when exploring various aspects of the science of education related to personality formation and to meeting the requirements of the present and the future. The analysis of prospective education revealed its degree of complexity in terms of the characteristics included in the research as conditions of its conceptualization. Studies of general epistemology state four main conditions that a certain field of knowledge must meet in order to acquire the status of science: to have its own research object; to develop a conceptual and explanatory system; to have its own methods and techniques for investigating the object of study, reunited in a scientific methodology; and to have a praxiology of the field, in the sense of influencing, directing, and controlling the phenomena it studies. Thus, the study confirms the theoretical conceptualization of the paradigm of prospective education as a science of education. | Introduction
As noted in the literature, "over the past few decades, education systems, especially in higher education, have been redefined. Such reforms inevitably require reconsideration of operational notions and definitions of quality, along with a number of related concepts. This reconsideration aligns with the core of higher education reforms: improving efficiency and compatibility with emerging social demands while adapting to competitiveness and accountability trends" [40]. Thus, restructuring the university education system represents a timely objective for the development of the Republic of Moldova. The strategic directions for monitoring and developing university education, described in the policies of sustainable development, have to be elaborated in relation to worldwide trends in the development of society.
Researchers Gormaz-Lobos, C. Galarce-Miranda, H. Hortsch and C. Vargas-Almonacid are of the same opinion when noting that "the new demands of the society and the economy, the constant specializations of the scientific fields, and the incorporation of new technologies for teaching and learning make that the typical contemporary forms for the teacher academic training must be reviewed and analyzed" [19]. Although the Education Development Strategy for the years 2021-2030 "Education-2030" of the Republic of Moldova [35] calls for a prospective, systemic, formative and dynamic education, centered on general human and national values, the prospective aspect remains underdeveloped in the educational standards, manuals and curricula of the university education system.
However, as V. Popa [29; 30] maintains regarding the Report on the specific objectives of the education and training system (Brussels, 2001), the representatives of the European Council started from the hypothesis that society assigns different points of focus to education, since what distinguishes our times is not the existence of change but its super-accelerated rhythms. This emphasizes the need to substantiate, theoretically and methodologically, a new field: Prospective Pedagogy (PP).
Upgrading the educational process in the light of PP requires a responsible analysis, as the future creates increasingly higher requirements. These requirements need changes depending on PP trends, which will substantiate the elaboration of the new educational policies and the university education system.
Thence, the scientific approach for a possible theoretical and methodological substantiation of the PP became one of the key matters of the modern pedagogy. The need to explore this field in the present is dependent on several factors:
1. the accelerated rhythm of the change, the globalization, the challenges of the 21st century, the innovations and the creativity, the internationalization of the university education; 2. the need to ensure the quality and the performance of human resources at the global, national and local level; 3. the lack of a sustainable policy at the state level in the field of PP; 4. the shortage of prospective investigations in relation to the education; 5. the weak information level of specialists in the field of education as the report between the demand of labor market, society and the university offer; 6. the skills of the specialist needed on the labor market.
At the same time, while different aspects of the science of education are explored, Prospective Pedagogy as a fundamental field has not been researched in detail and has not been conceptualized. This situation is seen as a dilemma or a shortcoming of the education sciences. The emphasis on the prospective character of education confirms its importance in training the personality to integrate into society and the labor market.
In this sense, M. Stanciu [34] considers that young people have to be prepared prospectively. Education will have to give the individual an "interior compass" that orients them better toward the future.
The purpose of the research resides in the theoretical and praxeological substantiation of the prospective education paradigm within the university, in order to develop and anticipate the educational process of the Republic of Moldova, and to establish the extent to which the prospective skill is planned in the university curriculum so as to train emerging specialists prospectively and professionally, directed toward the values of the sustainable/prospective development of education institutions and society.
In order to achieve the purpose, several objectives of the research are outlined:
1. Analyzing in multi-aspectual way the epistemological fundamentals of the prospective education under the conditions of the continuous educational reforms in the permanent education. 2. Conceptualizing the Prospective Education as a new paradigm in founding the Prospective Pedagogy (PP); 3. Elaborating and validating by experiment the paradigm of the Prospective Education.
---
Materials and Methods
Our study implements theoretical analysis of the Education Cod focused on information analytics of advanced higher education institutions of the Republic of Moldova [7], as well as two major documents in the legislative system of Moldova concerning Moldova Higher Education Reform project and decision of the Government of the Republic of Moldova dated of 28.06.2017 Nr. 482 "Nomenclature of Domains training and specialties in higher education" [26].
The research methodology meets the object, the purpose and the referred sources and constituted of: a) theoretical methods: scientific documentation, theoretical synthesis, deduction, generalization and systematization, comparison, transfer of theories; b) experimental methods: pedagogical experiment including: direct observation, testing, questioning, conversation; c) statistic methods: data collection, mathematical statistics.
To realize the multi-aspectual analysis of the prospective education's epistemological fundamentals, the specialty literature was analyzed to identify the place of PP in the system of education sciences.
---
3 Prospective pedagogy in the system of education sciences
Following the analysis of the specialty literature, we observed that the Prospective Pedagogy localized differently in the system of the education sciences based on certain criteria. As argument, we propose a table showing the fact that the Prospective Pedagogy frames as fundamental theoretical field (see Table 1).
The framing of Prospective Pedagogy within the fundamental field of the system of education sciences may also be observed in I. Bontaş, I. Jinga and E. Istrate [20], although these authors also quote Şt. Bărsănescu. O. Dandara [12] holds the same opinion: she framed PP among the fundamental sciences, approaching it analytically in a temporal context in the course "Pedagogy", published in 2010, which describes the classification of education sciences adapted from E. Macavei [25]. Therefore, we ascertain that a good part of the authors from the Romanian area have no research in the prospective field. We should also remark that the model of M. Gatson [15] proposes, as sciences referring to the reflexive and prospective analysis of the future, the philosophy of education and education planning, while in C. Birzea [6] we find education planning under the criterion of predominant research methodology.
Approaching PP as a fundamental field of the education sciences permits the conclusion that it has as its object of study the prospective education, a new paradigm.
There are many interpretations of the notion of "paradigm". In the philosophy of science, G. Bergman introduced the term, but Thomas Kuhn contributed essentially to its scientific elaboration. The latter defines the paradigm as a set of ideas and beliefs shared by the scientific community, based on a prioritized scientific "realization" defining the researched and solved problems, taken from the practice of normal science [22]. As a current aspect, Dm. Patrascu maintains, the paradigm circumscribes what supports new research in the philosophy of science. Paradigm: 1) an initial conceptual schema, the model of stating a problem and its settlement, and the research methods dominant during a historical period; 2) a theory (model, type of stating the problem) accepted as an example in solving research tasks. By the pedagogical paradigm (from Greek: model, example, learning) we understand the general picture of education as a model for pedagogical action [27].
The most accepted interpretation of the "paradigm" term in our research is that of "model", aspiring to reproduce the essential elements of original, natural or socially studied phenomena and processes.
---
Theoretical basis for the elaboration and the development of the prospective pedagogy
Pedagogy is a theoretical-praxiological science in which knowledge and action, explanation and application, theory and practice are inseparable sides of the perspective from which it assumes education as its own object. This distinctive note of pedagogy is emphasized by several authors. In this sense, R. Hubert mentions that pedagogy is a practical science and a way of thinking; E. Planchard considers that in pedagogy we differentiate the descriptive plan (of knowledge) and the normative plan (of action); J. S. Bruner, referring to the theory of training, shows that it is not only descriptive and explicative but also prescriptive and normative [apud 28]; and D. Todoran highlighted that "the education science, the pedagogy, tends to discover the laws intervening in the development of educational phenomena in order to control, manage and plan them gradually" [36]. The theoretical basis consists of laws, theories, principles, conceptions and ideas from the fields of pedagogy, philosophy, psychology, the philosophy of education, pedagogical anthropology, ethics, the sociology of education, etc. The research relied on ideas and philosophical-anthropological concepts regarding prospective education: the problems of selecting the scientific content for the organization of the subjects of education, the philosophy of experience and pragmatic instrumentalism [10], the socio-economic problems of the masses [16], the approaches to the relation between education and society [13], and the theories relating to decision-making, including the model of expectations [39].
A particular role in this sense is ascribed to the theories of preparing human resources for the future [10], [37], [11], [36], the prospective triangle [18], the theories of change [8], the theory of perspective [21], the theory of expectations [39], and others.
We took into consideration, in the delimiting of Prospective Pedagogy in the education process, the theory of Romanian scientist D. Todoran describing prospective education (PE) as one of the sectorial dimensions, together with the economic, technological, political, cultural and social one [36]. The interdependence of these dimensions becomes an incontestable reality creating a climate of uncertainty and determines the research of global approaches of matters.
Starting from the idea that "Prospective Pedagogy researches education from the perspective of the future" [1] and that prospective education is precisely the type of education explicitly targeting transformation and the future, we arrive at the core of the same antinomy shaking the field of education: tradition or modernity, adaptation for mention [2] or innovation for overcoming.
---
Functions of prospective education
In terms of applicability, we deduced the functions exercised by prospective education: anticipation, innovation, adaptation, planning, integration into social life, and orientation.
All these functions have to be seen in unity and interdependence. They represent, in the same time, a report reflecting the specific reality it meets, following its permanent adaptation to the requirements of global social system and to its main perspective subsystems.
The analysis of PE revealed its degree of complexity with respect to the characteristics included in the research as conditions for conceptualizing PE. The detailed analysis of scientists' interpretations in the field of education sciences made it possible to establish the specificity of PE: it supposes a holistic, transdisciplinary, probabilistic approach; it has a cognitive structure based on the prediction of an educational event or phenomenon; and it represents a type of emerging knowledge with an anticipative, dynamic, participative, operational, heuristic and innovative character [23].
---
Fig. 1. Functions of prospective education
---
Conditions of Prospective Pedagogy to acquire the status of science
The study of general epistemology enunciates four main conditions that must be met for a certain field of knowledge to acquire the status of science (Pedagogy Fundamentals):
1. To have its own research object. Prospective Pedagogy is a plainly justified field of the education sciences, which has as its own research object the prospective education, this being not only a real and complex field but also one of maximum importance, requiring an appropriate scientific and rational approach.
We identify in the specialty literature [2], [11] different approaches related to PE. We accepted as starting point in our research the definition of R. Dottrens characterizing PE by the orientation toward the future, having as object of study the probabilities of collective evolution in order to establish the fundamentals of an education adapted to the situations and the requirements of tomorrow. [11] P. Apostol [2] considers that PE refers to a global, peculiar systemic function of social complexes, the one of "production/reproduction", more exactly of forming types of personality appropriate to a society in a determined stage of its history.
M. Stanciu asserts that PE is a methodical investigation of the future by using an approach favoring the changes and the renewals. In contrast to the futurology, the prospective tries to avoid the rupture between the past and the future [34].
At the core, the PE conceptual analysis supposes the clear delimitation of invoked term's functionality. We consider that PE as the study object of PP, may be interpreted from several perspectives:
---
• as field of science, addressed to the study of factors, mechanisms of prospective construction, development of prospective and competent personality for the present and the future etc.; • as dimension of the education system and other systems;
• as continuous process of gathering and forming values, prospective personality;
• as study discipline of the educational process aimed at forming prospective skills;
• as compound/method of integrating and applying in other disciplines;
• as an infusional element in the area of different disciplines, etc. [23]

2. To elaborate its own conceptual and explicative system, composed of concepts, judgments and laws (utterances), capable of overcoming the descriptive phase and allowing access to explanation and prediction. PP in the Republic of Moldova is currently at the phase of constituting its own conceptual and explicative system, which requires the development of an exact and stable pedagogical language in terms of meanings. Although the degree of conceptualization does not reach the rigor and precision of other sciences, the scientific status of prospective pedagogical concepts and utterances cannot be questioned, as it is validated by formal and non-formal educational practice, and some concepts (prospective, prospective character, prospective education, prospective pedagogy, etc.) have been emphasized in the specialty literature [5], [9], [10], [11], [36]. The conceptual analysis of the proposed terms requires the delimitation of the notions and their respective contents. We find several terms in use in the specialty literature: prospective pedagogy, prospective education, etc., each having, as we mentioned, a common spectrum of problems and a specific analysis of the educational field.
This perspective leads toward the analysis of different approaches of PE. Due to these considerations, it relies on three existing concepts: prospective education [36], prospective pedagogy [25] and the prospective character of education [32]. Although the delimitation of these concepts was approached partially, a comparative analysis of them was not realized at theoretical-practical level.
In our opinion, prospective education is an anticipative activity, oriented by its outcomes toward the future. We join the opinion of the scientist V. M. Cojocariu [8], who mentions that under the conditions of current society, with its accelerated rhythm of evolution, it is necessary to promote the training of a personality capable of settling the problems of life and activity.
As for the term prospective education, D. Todoran [36] defines it as training the individual for the future and in the future. The author also maintains that prospective education, in the broad sense, covers any futurist research and construction, and, in the narrow sense, refers to research and studies on the possible future in this field.
The expression Prospective Pedagogy was introduced in the pedagogical language for the first time by G. Berger, who suggested the idea of a new direction, but he did not develop it. His ideas were taken by the promoters of the permanent education [5].
The analyzed definitions clarify the difference between them: prospective pedagogy aims at orienting education toward the requirements of the future, whereas prospective education represents a systemic study of models of educational systems and of future processes and education systems, highlighting the conceptions of future education.
Significant in our research is the prospective character of education imposing a double conditioning: the appropriate reporting to the characteristics of future society and the functioning of present society. Due to its prospective character, the education not only adapts to the specificity of anticipated changes, but also prepares the conditions resulting in these changes and it models by current actions the specificity itself of the future society. S. Cristea [9] reveals that one of internal characteristics of education policy is the prospective character emphasizing that the education activity always aims at a future, strategic, current and conjectural situation.
The absence of the professional and scientific terminology of the "prospective" category, which would meet the correlative notion of "perspective", "proactive", leads to the terminological shortage.
We want to specify that the need to substantiate the new notions in the pedagogical science is due to several factors:
1. the significant increase in the importance of the prospective character and dimension of education in training and evaluating the processes of social and personal development along the way; 2. the need to institute within the education sciences a field that would substantiate the correlation of educational actions across the past, present and future, related to the present and future needs of the individual personality and of the entire society.
The educational system has to provide beneficiaries with possibilities to develop skills with a prospective accent; 3. the non-conformity of traditional pedagogical notions used previously ("the perspective education", "the prospect of the future", "the planning of education", "the education through change and development", "the education for tomorrow" and other variations), which do not reflect the essence of the object but, on the contrary, limit its comprehension, which is why the local culture comes with multiple and controversial connotations of the term "prospective".
In essence, at level of concept, PE was reported to the current and perspective requirements of the society by orienting toward a new modality of education providing to the individual the possibility to face the unpredicted events, by anticipation and participation. Therefore, PE may be analyzed: largely and narrowly.
---
─ Largely, PE provides a new value organization of the personality's expectations through value and significant hierarchy of skills contributing to the achievement of the educational ideal. ─ Narrowly, PE represents an organized and designed process of personality's development for future from biological, psychological and social point of view, of training the consciousness and the proactive behavior of the active integration in
the social life, which changes continuously [23].
For the development of the conceptual framework mentioned above, we propose the introduction of prospective skill as necessary functional category in the following formula: the prospective skill represents a finalized structure, generated by the mobilization of subject's internal resources quantum in a delimited framework of significant situations (pedagogically deliberate or spontaneous, with disciplinary or interdisciplinary character) and manifesting by anticipation, planning and sense assignments and action's direction [23].
---
3. To have its own investigation methods and techniques for the study object, reunited in a scientific methodology able to provide and produce true, verifiable and pertinent information about the reality studied by the science. PP has its own methods and techniques for researching its study object, reunited in a pertinent scientific methodology. Although many of the methods are taken from other sciences, especially from psychology, sociology and economics, these methods are integrated into a unitary pedagogical methodology, adapted to the objectives, requirements and peculiarities of researching the educational phenomenon through its prospective dimension and character.
In fact, the prospective pedagogical research contributes to the development of methods, to the validation of new research techniques, realizing transfers with other sciences participating in the interdisciplinary research of PE.
Many theories directly bear the imprint of prospective methods: prospective analysis, the Delphi method, alternative modelling of the future and others. Together with the empirical and theoretical methods of studying the future (D. Todoran, p. 205), the methods of designing and modeling the future, methods of learning and methods of assessment are emphasized.
A varied series of prediction methods may be used in the prospective methodology and applied in the training-educative process. The project development method may be useful in several stages of the decision-making process. The scenario may serve, at the information stage, as an approximate prediction technique for a "bundle" of possible evolutions of a problem or trend. As heuristic educational methods, we use in our research problematization, the method of projects, etc.
---
4. To have a praxeology of the field, i.e., principles, norms and rules of practical actions, methods and tools, in the sense of influencing, directing and controlling the phenomena it studies. PP has a praxeology of the field, with norms and principles of practical action. This is, without doubt, both a condition and a peculiarity of PP. Before becoming a scientific theory, PP was approached in practice: in the fields of planning, business, the environment, economics and policy studies [33]; in the development of technology, as Virtual Reality currently opens new possibilities for the investigation and training of Mental Rotational Ability, which is an important factor in the development of technical skills [3]; and in anticipating directions of evolution at the level of human resources, applying principles and methods verified experimentally and partially conceptualized by philosophy, pedagogy, sociology, etc. In this sense, PP is the theoretical conceptualization of the experience of PE.
PP relies on the fundamental principles related to the educational process while considering its prospective specificity. In this sense, we consider that the functionality of the PE Paradigm may be ensured by respecting the following specific principles: the principle of social stringency and the global approach, the principle of temporal perspective, the social and individual axiological principle, the principle of learning by experience, and the principle of anticipating, orienting and prospectively designing education [23]. It is important to complete these principles with the principle of globalism and the constructivist principle.
The formulated principles serve as theoretical and normative foundations in order to achieve the expected effect to meet the PP objectives and represent the nucleus of elaborated model.
By respecting the PP principles in education institutions, PE will make its presence felt, thus realizing the desideratum that the present of the education process be rethought from the perspective of the future and, implicitly, ensuring the quality and performance of human capital.
Together with other pedagogical sciences, PP may be considered a science with a theoretical, gnoseological character, answering the question of what PE is and, by elaborating the prospective pedagogical theory, contributing to the development of human knowledge in general. PP is also a science with a praxiological character: in answering the question of what PE is, it relies on the laws of education and on pedagogical theory, together with the strategies and technologies (methods, forms, means) of training and educating the new generation, and thus manifests itself as a science with an efficient educative action.
PP is a dynamic science, open to changes and innovations, prospectively planning the appropriate strategies for the future. In this sense, we highlight the situation of educational systems that, due to the pandemic, placed a special emphasis on Internet technologies in distance education, which had to be reorganized, revised and implemented to provide students with an opportunity to study [38] and to become prospective.
In the context of the abovementioned facts, we propose the complex definition of the prospective pedagogy: Prospective Pedagogy (PP) represents a fundamental field of the education sciences, which based on the general and specifically prospective education strategies and laws, studies and substantiates appropriately the process of training the prospective personality.
In the broad sense, PP represents a fundamental field of the education sciences with a theoretical, praxiological and prospective character, which studies and manages the process of value adaptation and change and has as its outcome the training of an integrally developed and prospective personality, capable of facing social transformations, fully substantiating its potential and skills and contributing to the achievement of the educational ideal.
In the narrow sense, PP studies the organized and planned educative process of developing the personality for the future and of training a consciousness and a proactive behavior of active integration into a social life in continuous change.
The validation of the ideas presented above led to new theoretical-applicative considerations, which establish the theoretical and methodological fundamentals of PE. They will be pertinent to PP as a distinctive field if PE is constituted as an educational paradigm with a theoretical-applicative character by:
• the valorization of anticipation and planning as fundamental elements of PE.

In this regard, we may assert that the PE paradigm represents a series of interconnected models, centered on training the prospective skill of specialists, organized on the basis of the general principles of learning, the prospective approach to education, and the psychological-physiological and social characteristics of the individual, having as its outcome the training of the prospective personality, expressed through anticipation, planning and direction toward the future. The essence of the PE Paradigm is shown in Figure 2. The process of training the prospective skills was ensured at two levels:
─ through the special discipline "Prospective Education" (30 hours) for pedagogues (cycle I); ─ integrated (planning the prospective skill in the curriculum of the discipline Professional Ethics) for students in Engineering and Information Technologies (cycle I). When analyzing the experimental data on the level of training of the prospective skills in the discipline PE, we observe different skill training levels. Hence, at level I the anticipation skill increased from 54% to 70% (specialty Psychopedagogy); at level II the anticipation skill increased from 40% to 49%; and at level III it increased from 0% to 11%. The same results were obtained through the infusion implementation of prospective skills in the disciplines Professional Ethics and Basics of Communication.
In closing, we may mention the following:
─ The created psychopedagogical conditions (implementation of the PE model) contributed to the increase of the prospective skill training level at levels I and II at the ascertainment stage, and at levels II and III at the training stage; ─ Training of the prospective skills at level III was registered in approximately 20% of the experimental subjects in the training stage.
As an important field of the education sciences, PP has elaborated/adjusted, and continues to elaborate, its specific categories, which compile the fundamental language of educational knowledge and action. The PP categories cover different dimensions, aspects, elements and their relations, such as: prospective education; prospective training; the ideal, objectives and prospective educational principles; the PE curriculum; methods and forms of didactic activity, etc.
Hence, PP is more than a paradigm or a pedagogical norm; it supposes the interference of a specific social system, which includes all learning experiences provided by society to the individual.
The pedagogical experiment showed that the substantiation of PP as a field must not exist only as a scientific-theoretical construction for the development of the education sciences; incontestably, it has to include a series of practical references ensuring its functionality. The basic condition is the knowledge of the scientific fundamentals of PP and the familiarization of the beneficiaries of university education with its content.
---
Conclusions
In our opinion, the theoretical and methodological substantiation of PE supposes the awareness that it matters for all fields of professional training. This situation requires the integration of PP into the university education system.
1. The analysis of the opinions of scientists in the field shows that PP is a science belonging to the fundamental field of the education sciences. The substantiation of PP as an important field of the education sciences relied on the main conditions that have to be met by a certain field of knowledge in order to acquire the status of science. In this context, given the deep changes of our times related to the introduction and use of new methods, prospective principles and technologies, accompanied by new forms of organization of the education process, the prospective development of the personality becomes a valuable and strategic factor for each educational institution and, respectively, for the labor market.
ee4119741508604010e610b0cf1955de82defc07 | Discrimination and Stress Among Asian Refugee Populations During the COVID-19 Pandemic: Evidence from Bhutanese and Burmese Refugees in the USA | 2,021 | [
"JournalArticle",
"Review"
] | Objectives To measure COVID-19 pandemic-related discrimination and stress among Bhutanese and Burmese refugees in the USA and to identify characteristics associated with these two measures. Methods From 5/15-6/1/2020, Bhutanese and Burmese refugee community leaders were invited to complete an anonymous, online survey and shared the link with other community members who were English-proficient, ≥18 years old, and currently living in the USA. We identified characteristics associated with pandemic-related discrimination and stress applying ordinal logistic regression models. Results Among 218 refugees from 23 states, nearly one third of participants reported experiencing at least one type of discrimination, and more than two-thirds experienced at least one type of pandemic-related stress. Having had COVID-19, having a family member with COVID-19, and being an essential worker were associated with discrimination. Discrimination, financial crisis, and female gender were associated with stress. Conclusions Reducing pandemic-related discrimination should remain a priority, as should the promotion of social support and coping strategies. Noting that this is a nonrepresentative sample, we recommend that larger national studies tracking experiences with pandemic-related discrimination and stress include Asian American subgroups with limited English proficiency. | Introduction
During the COVID-19 pandemic caused by novel coronavirus SARS-CoV-2, fear, rumors, and misconceptions about the novel coronavirus have placed Asian Americans in the spotlight of blame and harassment [1][2][3][4][5]. Instead of preventing discrimination and xenophobia, government officials repeatedly labeled the virus the "Wuhan coronavirus" or the "China virus," potentially accelerating COVID-19 related racial attacks on Asian Americans. In March 2020, the Federal Bureau of Investigation issued a warning about a potential surge of hate crimes against Asian Americans [6]. In April 2020, the Center for Public Integrity reported that 32% of Americans have witnessed someone blaming Asian people for the pandemic [7]. From March 19 through August 5, 2020, over 2,500 instances of anti-Asian discrimination were reported to the Stop AAPI Hate Tracker, an online tool for reporting incidents of hate, violence, or discrimination against Asian Americans and Pacific Islanders in the USA [8]. Such discrimination may contribute to long-term distress, including depression, trauma, anxiety, and posttraumatic stress disorder [3,9,10]. The experience and magnitude of race-based traumatic stress can further impact individuals' perceptions of their ability to cope with such events [11].
Coping with anti-Asian discrimination and stress may be particularly challenging for Asian-origin refugees. Refugees with limited English proficiency face difficulty reporting harassment or seeking assistance in their preferred languages [12]. Refugees with lower socioeconomic status have reduced access to support services and coping resources both due to cost and also due to competing demands, such as work schedules. Refugees from Asia are less likely to seek mental health services due to stigma regarding mental illness, concern about being perceived as "crazy" and decreased emphasis on psychological solutions for emotional stress [3,13,14]. These barriers may be further amplified among refugees who are fearful about speaking out and drawing attention to their experiences due to premigration political repression, including repression that targeted individuals who advocated for themselves and their communities [15]. Additionally, refugee communities often have large populations of essential workers who are unable to work from home, so they may be exposed to harassment at worksites or when traveling to and from work [16,17]. Further, refugees with a perceived risk of COVID-19 exposure through work, e.g., health care personnel, may experience discrimination and stigmatization by individuals fearful of infection [18,19].
Despite these commonalities, experiences with pandemic-related discrimination are also believed to vary across different Asian American subgroups. The US Asian population comes from more than 20 countries, each with a unique history, language, and cultural background. Socioeconomic and health status differ widely across Asian American subgroups, e.g., when comparing the experiences of English-proficient, white-collar professionals who migrate to the USA with the sponsorship of an employer versus those of predominantly working-class refugee communities. Though these differences significantly impact the risk of COVID-19 infection, very few states have included COVID-19 statistics for disaggregated Asian American subgroups in their public health reports. The majority of current research also ignores the heterogeneity of COVID-19 across different Asian American communities.
The Bhutanese and Burmese refugee communities are two Asian American subpopulations with multiple risk factors for COVID-19 related discrimination. Bhutanese and Burmese refugee communities are also among the largest refugee communities resettled in the USA between 2000 to 2015, and they have among the highest foreign-born shares of any Asian-origin communities in the USA (Bhutanese 92%, Burmese 85%) [20]. Both communities have relatively high poverty rates (Bhutanese 33%; Burmese 35%; Asian American 12%; US population 15%). They also have lower rates of English proficiency (Bhutanese 27%; Burmese 28%; Asian American 70%) and are less likely to have a bachelor's degree relative to the general US population (Bhutanese 9%; Burmese 24%; Asian American 51%; US population 30%) [20,21]. The majority of Bhutanese refugees living in the USA are Nepali-speaking Lhotshampa who were forced to flee Bhutan due to political repression and ethnic violence culminating with the mass expulsion of Lhotshampa Bhutanese in the 1990s. After nearly two decades living in refugee camps in Nepal, this predominantly agrarian and multigenerational community was allowed to resettle in the USA beginning in 2007 [21]. Similarly, most Burmese migrants to the USA since 2006 are political refugees. Many come from rural regions where minority ethnic groups, such as the Karen and Chin, experienced recurrent repression and violence during armed conflicts between the national Burmese Army and ethnic opposition groups. More than a million people from Burma (now called Myanmar) have been displaced to neighboring countries, including Bangladesh, India, Malaysia, and Thailand. Most Burmese refugees in the USA lived in these areas prior to resettlement [20,21].
For these reasons, we hypothesize that Bhutanese and Burmese refugees are at high risk of pandemic-related discrimination and stress. However, to date, there has been limited data describing the experiences of these refugee populations during the pandemic. In this study, we measure the distribution of pandemic-related discrimination and stress, as well as identify predictors of these two measures among Bhutanese and Burmese refugees in the USA.
---
Material and Methods
---
Data Collection
We conducted a cross-sectional study using a snowball sample. We limited participants to English-proficient individuals who were age ≥ 18 years and currently living in the USA. From 5/15/20 through 6/1/20, we emailed or messaged an anonymous, online survey link to 19 bilingual Bhutanese and Burmese refugee community leaders identified through the study team's existing professional networks. These individuals were predominantly prior participants in community health leadership trainings or leaders of refugee-led community organizations. They were asked to complete the survey and share the link with peers who met the inclusion criteria. To decrease potential selection bias, the survey invitation asked participants to share their experiences during the pandemic and did not specifically invite participants who had experienced discrimination. This study was approved by Ball State University's Human Research Protection Office (IRB#: 1605425).
---
Measures
---
Outcome
To assess pandemic-related discrimination, participants were asked to answer three questions adapted from the Understanding America Study, which asked if they had experienced the following at any time during the COVID-19 pandemic: (1) felt threatened or harassed by others because they think you might have the coronavirus, (2) felt others were afraid of you because they think you might have the coronavirus, and (3) been treated with less respect than others because people think you might have the coronavirus. Responses were coded as binary variables with 1 (Yes) or 0 (No). We then generated an ordinal variable measuring the number of types of discrimination experienced by adding the outcomes of these three measures of discrimination. The ordinal discrimination measure was used for bivariate and multivariate analyses.
We measured pandemic-related stress by asking participants to rate the following stress experiences during the COVID-19 pandemic: (1) nervous about current circumstances, (2) worried about my health, (3) worried about my family's health, and (4) stressed about leaving the house. Response options ranged from 1 = "does not apply at all" to 5 = "strongly apply." We first coded these experiences as binary variables with 1 (strongly apply) or 0 (does not apply at all, somewhat does not apply, neither applies nor does not apply, or somewhat applies). We then generated an ordinal variable to measure the amount of stress experienced by summing these newly coded binary measures of stress. The ordinal stress measure was used for bivariate and multivariate analyses.
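A minimal sketch of the outcome coding described above (Python/pandas, with hypothetical column names and values; the study's actual analysis was run in Stata): Likert responses are dichotomized and the binary indicators are summed into the ordinal discrimination and stress scores.

```python
import pandas as pd

# Hypothetical survey extract; values are illustrative only.
df = pd.DataFrame({
    "disc_threatened": [1, 0, 0],
    "disc_feared":     [1, 1, 0],
    "disc_disrespect": [0, 1, 0],
    "stress_nervous":       [5, 3, 4],  # 1 = does not apply at all ... 5 = strongly apply
    "stress_own_health":    [4, 5, 2],
    "stress_family_health": [5, 5, 3],
    "stress_leaving_house": [2, 5, 1],
})

# Ordinal discrimination score: count of discrimination types endorsed (0-3).
disc_items = ["disc_threatened", "disc_feared", "disc_disrespect"]
df["discrimination_score"] = df[disc_items].sum(axis=1)

# Stress items dichotomized: 1 only if "strongly apply" (=5), else 0,
# then summed into the ordinal stress score (0-4).
stress_items = ["stress_nervous", "stress_own_health",
                "stress_family_health", "stress_leaving_house"]
df["stress_score"] = (df[stress_items] == 5).sum(axis=1)

print(df[["discrimination_score", "stress_score"]])
```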
---
Covariates
Covariates included in the adjusted models for pandemic-related discrimination were having had COVID-19, having a family member who had COVID-19, being an essential worker during the pandemic, gender, age, education, and years spent in the USA, as these covariates are known to be associated with discrimination and stress from previous studies [2, 23-26]. COVID-19 infection was measured as a binary, self-reported outcome, using Yes/No responses to the following question, "Are you or have you been infected with the novel coronavirus?" Having a family member with COVID-19 was measured as a binary variable of whether anyone in the household is or has been infected with the novel coronavirus. Individuals working for pay at a job or business in the 7 days prior to survey completion were categorized as essential workers if their occupation corresponded to one described as providing "COVID-19 Essential Services" under Massachusetts Governor Baker's March 23, 2020 Emergency Order, updated on March 31 and April 28 [27]. Those whose occupation corresponded to essential services but who did not work in the past 7 days due to COVID-19 infection were also categorized as essential workers. Age was categorized as less than 31, between 31 and 40, and more than 40, considering the age distribution of our participants. Education was measured as secondary degree (junior high or senior high school), associate degree (community college, junior college, or technical school), and bachelor's degree. Years spent in the USA was a continuous variable and represents an approximate measure of acculturation.
The model for pandemic-related stress included these covariates, pandemic-related financial crisis, and the ordinal measure for pandemic-related discrimination. Financial crisis was included because it is a common cause of emotional distress. Financial crisis was a binary variable capturing if the participants' family had experienced financial crisis during the coronavirus pandemic. Since the relationship between discrimination and stress has been established in other contexts, pandemic-related discrimination was also included here [28,29].
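A small illustration of the covariate construction described above (Python/pandas, hypothetical column names and records; the original coding was done in Stata): age is binned into the three categories and the essential-worker indicator combines occupation type with recent work status.

```python
import pandas as pd

# Hypothetical respondent records; names and values are illustrative only.
df = pd.DataFrame({
    "age": [24, 35, 52],
    "essential_occupation": [1, 1, 0],   # occupation on the essential-services list
    "worked_past_7_days":   [1, 0, 0],
    "off_work_due_to_covid": [0, 1, 0],
})

# Age categories: <31, 31-40, >40.
df["age_group"] = pd.cut(df["age"], bins=[0, 30, 40, 120],
                         labels=["<31", "31-40", ">40"])

# Essential worker: essential occupation AND (worked in past 7 days OR
# absent only because of COVID-19 infection).
df["essential_worker"] = (
    (df["essential_occupation"] == 1)
    & ((df["worked_past_7_days"] == 1) | (df["off_work_due_to_covid"] == 1))
).astype(int)

print(df[["age_group", "essential_worker"]])
```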
---
Statistical Model
We first examined the distribution of each outcome and covariate. We then conducted bivariate analyses to measure the association between participants' characteristics and pandemic-related discrimination and stress. We applied Fisher's exact tests and one-way analysis of variance (ANOVA) tests to measure differences in pandemic-related discrimination and stress across categorical variables and continuous variables, respectively. Finally, we identified characteristics associated with pandemic-related discrimination and stress by applying adjusted ordinal logistic regression models. We tested the proportional odds assumption of the ordered logistic regression models to check whether the coefficients are equal across categories. Multicollinearity was tested for and not found. Less than 5% of all measures were missing. Due to this small percentage, we considered all missing values to be missing at random. The significance level was set at 0.05, two-sided. Analyses were conducted using Stata/SE 15.1.
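The paper's models were fit in Stata; as a rough illustration of the same ordinal logistic setup, the sketch below uses the OrderedModel class from Python's statsmodels on simulated data with hypothetical variable names. It is a stand-in for Stata's ologit, not a reproduction of the study's results.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 200

# Simulated stand-in for the survey data (illustration only, not the study data).
df = pd.DataFrame({
    "discrimination_score": rng.integers(0, 4, n),  # 0-3 types of discrimination
    "financial_crisis": rng.integers(0, 2, n),      # 1 = family financial crisis
    "female": rng.integers(0, 2, n),
})
# Latent stress driven by the covariates, then cut into an ordinal 0-4 outcome.
latent = (0.8 * df["discrimination_score"] + 0.6 * df["financial_crisis"]
          + 0.4 * df["female"] + rng.normal(0, 1, n))
df["stress_score"] = pd.cut(latent, bins=5, labels=False)

# Discrimination entered as dummy levels so each level gets its own odds ratio,
# mirroring how the paper reports ORs for one, two, or three types.
exog = pd.concat(
    [pd.get_dummies(df["discrimination_score"], prefix="disc", drop_first=True),
     df[["financial_crisis", "female"]]], axis=1).astype(float)

result = OrderedModel(df["stress_score"], exog, distr="logit").fit(
    method="bfgs", disp=False)

print(np.exp(result.params[: exog.shape[1]]))  # odds ratios for the covariates
```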
---
Results
Table 1 shows the characteristics of the study participants. In total, 218 Bhutanese and Burmese refugees from 23 states completed the survey. The majority were Bhutanese (86.2%), and just over half were male (60.1%). Approximately half were more than 30 years old (52.4%), had received a bachelor's degree or higher (50.0%), and had an annual household income of less than $50,000 (52.3%). The average time participants had spent in the USA was 9.99 years. Nearly half of the participants were essential workers (41.7%). Nonetheless, pandemic-related job loss (46.3%) and family financial crisis (36.7%) were common. Nearly 7% of participants reported having been infected with the coronavirus, and the same proportion reported having family members infected with the coronavirus.
Table 2 displays experiences with pandemic-related discrimination. Nearly one third of the participants (31.3%) reported experiencing at least one type of pandemic-related discrimination. A total of 15.1, 9.6, and 5.5% of the participants reported experiencing one, two, or three types of discrimination, respectively. Most often, participants reported feeling that other people were afraid of them (27.5%). Additionally, 12.8% of respondents reported feeling threatened or harassed, and 10.6% reported feeling as if they had "been treated with less respect than others as people think you might have the novel coronavirus."
Table 2 also displays pandemic-related stress. More than two-thirds of participants (68.8%) experienced at least one type of pandemic-related stress. A total of 25.2, 17.4, 12.4, and 13.8% of the participants reported experience one, two, three, or four types of stress, respectively. Specifically, nearly one third of participants strongly endorsed feeling nervous about the current circumstances (33.9%), feeling worried about their health (28.0%), or feeling stress about leaving home (29.8%). Over half of participants strongly endorsed feeling worried about their family's health (60.6%). Table 4 shows the bivariate analysis of participants' characteristics and pandemic-related stress. Those who experienced more types of discrimination (P value < 0.001), those who experienced financial crisis during the pandemic (P value = 0.013), and women (P value = 0.040) were more likely to experience more types of pandemic-related stress. Table 4 also displays the multivariate ordinal logistic regression model for pandemic-related stress. The results indicate a strong association between the amount of pandemic-related stress and the amount of pandemicrelated discrimination (one type of discrimination: OR 2.70, 95% CI 1.31, 5.58; two types of discriminations:
---
Discussion
This study describes characteristics associated with pandemic-related discrimination and stress in two Asian refugee communities. Notably, the Understanding America Study reported that 0.9, 5.9, and 4.0% of Asian Americans reported feeling threatened or harassed by others, feeling that others were afraid of them, or feeling that they were treated with less respect than others because others thought they might have the coronavirus in the prior 7 days, based on data from May 23, 2020 [22]. While our survey did not use the same 7-day time frame, participants reported markedly higher rates of discrimination. Our results are consistent with another online survey of Asian Americans during the pandemic [10].
We identify risk factors for experiences with discrimination in these communities, including having had COVID-19, having a family member with COVID-19, and being an essential worker. In addition to experiencing COVID-19-related discrimination from others, those who are infected and their family members tend to blame themselves or their family members for contracting the disease, which makes it harder for them to fight COVID-19-related stigma [30]. In other studies, essential and frontline workers have reported high rates of social isolation, stigma, and discrimination due to their heightened risk of COVID-19 and others' fear of infection [31,32]. In our study, around 40% of the participants were essential workers. In the USA, a large number of refugees work in healthcare settings, food supply chain functions, grocery stores, supermarkets, restaurants, and food service establishments, which may expose them to a high risk of pandemic-related discrimination [16,30]. However, there has been a lack of education, legislation, and policy to address this discrimination.
Experiencing pandemic-related discrimination is associated with participants' experience of pandemic-related stress. While our cross-sectional study does not establish a causal relationship between pandemic-related discrimination and stress, this finding echoes previous studies showing that discrimination can lead to negative and long-term consequences for mental health [3, 10, 15, 33-35]. While societal strategies for decreasing discrimination are paramount, other researchers have also found that social support and coping strategies can buffer the immediate negative emotional impact of discrimination on Asian Americans [35].
Our study also suggests that experiencing financial crisis during the pandemic increases the likelihood of experiencing higher amounts of pandemic-related stress among Bhutanese and Burmese refugees. In these two predominantly low-income populations, this is likely explained by the impact of financial crisis on individuals' access to basic necessities, such as food, shelter, or healthcare [35-37].
Women were more likely to experience higher amounts of pandemic-related stress than men. This result corresponds with recent findings of high levels of stress and fear of COVID-19 among women [38-40]. This gender difference may be explained by the disproportionate responsibility that many women face in taking care of children and other family members during the pandemic, as well as the disproportionate impact of pandemic-related job losses on women [41].
The study has limitations. Chief among them is reliance on a small, non-representative sample of English-proficient respondents, especially given that English proficiency is reported by just 27 and 28% of the overall Bhutanese and Burmese populations in the USA, respectively [20]. Additionally, levels of annual household income and educational attainment among our respondents were higher than those of others in their communities. For this reason, results may not be generalizable to the entirety of the Bhutanese and Burmese refugee communities in the USA. We also speculate that people with higher levels of concern about COVID-19 would have been more likely than people with lower levels of concern to complete the survey, so our results may overestimate the prevalence of COVID-19-related stress in these communities. The distribution of COVID-19 cases may also be an underestimate, considering the marked shortage of SARS-CoV-2 tests in the USA when data were collected in May 2020 [42]. The prevalence of essential workers may also be underestimated, as it was defined by participants' working status in the week prior to the survey. Those who worked during the pandemic but not during the required timeframe, for reasons unrelated to COVID-19 infection, were coded as non-essential workers. Finally, some of the measures of discrimination and stress have not been validated. We encourage other researchers to replicate our study with a representative sample and novel measures of key variables.
---
Conclusions
Reducing pandemic-related discrimination should remain a priority as we work to strengthen our public health response to the pandemic. Public officials should avoid terms such as "China Flu" and consistently condemn racism [1, 43]. Public messaging should remain science-based. Because workplace incidents are potential civil rights violations and have been reported by multiple prior studies, we suggest that employers consider proactive and preventive actions [10, 44]. Programs that enhance social support and teach coping skills may also buffer the immediate psychological impact of discrimination [10, 35]. More importantly, policies, regulations, and education are needed to address pandemic-related stigma and discrimination. Finally, we recommend that larger national studies tracking experiences with discrimination and stress during the pandemic include Asian American subgroups with limited English proficiency [26, 45, 46].
---
Availability of Data and Material Available upon request.
Code Availability Available upon request.
---
Declarations
---
Conflicts of Interest
The authors declare that they have no conflict of interest.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 20,330 | 1,373 |
4c493c608388304dc5b8a7b4aeacaf6e02a58efe | Social discourse and reopening after COVID-19: A post-lockdown analysis of flickering emotions and trending stances in Italy | 2020 | ["JournalArticle"] | Social discourse and reopening after COVID-19: A post-lockdown analysis of flickering emotions and trending stances in Italy. Although the COVID-19 pandemic has not been quenched yet, many countries lifted nationwide lockdowns to restart their economies, with citizens discussing the facets of reopening over social media. Investigating these online messages can open a window into people's minds, unveiling their overall perceptions, their fears and hopes about the reopening. This window is opened and explored here for Italy, the first European country to adopt and release lockdown, by extracting key ideas and emotions over time from 400k Italian tweets about #fase2 (the reopening). Cognitive networks highlighted dynamical patterns of positive emotional contagion and inequality denounce invisible to sentiment analysis, in addition to a behavioural tendency for users to retweet either joyous or fearful content. While trust, sadness and anger fluctuated around quarantine-related concepts over time, Italians perceived politics and the government with a polarised emotional perception, strongly dominated by trust but occasionally featuring also anger and fear. | Introduction
Social media represents a valuable source of information for understanding how people perceive and discuss events. Internet discourse has given voice to millions of users, creating flows of information populated by many different viewpoints (Dodds, et al., 2011;Stella, et al., 2018;Ferrara, 2020). Identifying and understanding users' knowledge and emotional content poses a research challenge with crucial implications. Under a time of crisis like the current one, where the COVID-19 pandemic is revolutionising people's way of life all over the world, Internet discourse is key for understanding how large audiences are perceiving multiple aspects of the global emergency. With the right tools, online discourse can unlock perceptions of the pandemic, subsequent lockdowns and their aftermaths.
This study adopts cognitive networks, tools at the fringe of computer science and psycholinguistics (Siew, et al., 2019), as a compass for exploring social discourse around post-lockdown reopening. Focus is given to unraveling the emotional dimensions of social discourse debating the multiple facets of reopening a whole country with the threat of a global pandemic.
Language as embedded in online messages is used to reconstruct how individuals perceived the reopening and emotionally coped with its multiple aspects. The identified emotional trends, and the tools highlighting them, help in understanding the key issues faced by people during a reopening, their fears but also their hopes, all information useful for effective future policy-making. To pursue this aim and test the power of the above techniques, Italy is selected as a case study.
---
Case study: Italy, COVID-19 and the lockdown
Italy was the first European country to release lockdown after being severely struck by the COVID-19 pandemic (Bonaccorsi, et al., 2020). In this way, the social dynamics taking place among Italian users on social media anticipate the discourse of other countries about the reopening. The whole country was locked down one day before COVID-19 was declared a global pandemic by the WHO on 11 March 2020. Several studies investigated how Italians reacted to the sudden lockdown. Pepe and colleagues (2020) identified drastic drops in social mobility, which were confirmed also by other studies (cf., Bonaccorsi, et al., 2020). Stella and colleagues (2020) investigated the Italian Twittersphere in the first week of lockdown and found evidence for Italians expressing concern, fear and anger towards the economic repercussions of lockdown. These fears became reality, as the lockdown strongly amplified social and economic inequality across the country, as recently quantified by Bonaccorsi and colleagues (2020).
After two months of nationwide lockdown, the slowdown of the COVID-19 contagion and the pressure of restarting the economy both motivated the Italian government to release the lockdown. On 4 May 2020, social mobility was almost completely restored. People could travel within their own regions, attend public places and enjoy a mostly normal lifestyle, all while the novel coronavirus still circulated within the population and hundreds of casualties were still being registered.
This study investigates the emotions and ideas before, during and after the 4 May reopening.
---
Research questions
By adopting a cognitive network science approach, considering text as data representative of people's mindsets as expressed in texts (Stella, et al., 2019; Stella, 2020; Stella, et al., 2020), this work explores and compares how different ways of reconstructing knowledge and emotions can address the following research questions:
RQ1: Which were the main general emotions flowing in social media about the reopening?
RQ2: Were there emotional shifts over time highlighted by some emotion models but neglected by others?
RQ3: Which were the most prominent topics of social discourse around the reopening?
RQ4: How did online users express their emotions about specific topics in social discourse?
RQ5: Were messages expressing different emotions reshared in different ways?
The main contribution of this investigation is identifying the key topics of discussion around the reopening through a cognitive network science approach. Rather than focusing on COVID-19 or its specific hashtags, key ideas and their emotional perceptions are identified in language, within the social discourse taking place in Italy around the loose topical hashtag #fase2. This hashtag, which stood as a synonym of reopening in Italian news media, included a wide variety of topics of discussion, a constellation of facets of debate regarding the restart, each one associated with certain language, semantic frames and emotions. The investigation over time of these interconnected semantic frames/emotional perceptions is the main focus of this investigation. Key ideas and emotions in discourse are extracted through emotional profiling and sentiment analysis. These two approaches are compared in their ability to detect emotional fluctuations of the whole social discourse over time (RQ1-2), with emotional profiling highlighting more facets of social debate than mere sentiment. Word frequency and cognitive networks are merged together in order to identify ideas of prominence for social discourse over time (RQ3-4). Emotional profiling around these prominent concepts outlines microscopic patterns of trust formation around the institutions and concern about the contagion that were not visible with the global-level emotional analysis. Behavioural trends towards messages containing different emotions are investigated and discussed in light of previous positive biases based on mere sentiment (RQ5).
---
Background and related literature
Stances in language. Identifying people's perceptions and opinions about something is a problem known as "stance detection" in computer science (Kalimeri, et al., 2019) and "authorial stance" in psycholinguistics (Berman, et al., 2002). The identification of a stance is crucial in every communication, in order to identify whether someone is in favour of or against a given topic, e.g., a person expressing support of the economic measures promoted by a government or giving voice to criticism about a given campaign of social distancing. Historically, stance detection has focused on speeches and written texts, like books or pamphlets, and used language analysis in order to reconstruct a stance, e.g., through the use of positive words. This task was performed by linguists and required human coding (Berman, et al., 2002).
Stances in social media. The advent of social media and the huge volumes of texts produced by online users made human coding impractical, motivating automatic approaches to stance detection with limited human intervention (cf., Mohammad, et al., 2016). The state of the art in identifying (dis)agreeing stances in social media is represented by machine learning approaches (Hassani, et al., 2020), which capture linguistic patterns from a training set of labelled texts, create an opaque representation of different stances and then use it for categorising previously unseen texts (Ferrara and Yang, 2015; Mohammad, et al., 2016). This approach is also powerful in detecting additional features of stances like sentiment intensity (Kiritchenko, et al., 2014; Hassani, et al., 2020), e.g., how positively or negatively a given stance is expressed. The main limit of machine learning is that the reconstructed representation of different stances cannot be directly observed. This issue prevents access to how knowledge and sentiment were structured in different stances, e.g., which concepts were associated with each other in a specific stance? To provide a transparent representation of knowledge and stances embedded in text, recent approaches have adopted cognitive network science (Siew, et al., 2019; Stella, et al., 2019).
---
Cognitive networks as windows into people's minds.
Cognitive networks model how linguistic knowledge can be represented in the human mind (Aitchison, 2012). Recent approaches overwhelmingly showed that the structure of conceptual associations in language is not only predictive of several cognitive processes like early word learning or cognitive degradation (cf., Siew, et al., 2019) but is also useful for reconstructing different stances in social media discourse (Stella, et al., 2018) or in educational settings (Rodrigues and Pietrocola, 2020; Stella and Zaytseva, 2020). Relying on these approaches, this manuscript adopts cognitive networks of syntactic associations between concepts for reconstructing the stances promoted by social discourse around specific aspects of the lockdown. Among many successful approaches building complex networks from text (Arruda, et al., 2019; Brito, et al., 2020; Rodrigues and Pietrocola, 2020), this work adopts the framework of textual forma mentis networks, representing syntactic and semantic knowledge in combination with the valence and emotional aspects of words (Stella, 2020). Reminiscent of the networked linguistic repository used by people for understanding and producing language, i.e., their mental lexicon (Aitchison, 2012), a forma mentis network opens a window onto people's minds (and mental lexica). This is achieved through forma mentis networks giving structure to language, reconstructing the conceptual and emotional links between words in a text. In this way, forma mentis networks reconstruct a collection of stances expressed in a discourse, i.e., a mindset (in Latin, forma mentis).
Combining networks and emotions. Coupling syntactic/semantic networks and emotional trends makes it possible to understand how individuals perceived and directed their emotions towards specific entities. For instance, Stella, et al. (2019) found that high school students directed anxiety and negative sentiment towards math, physics and related concepts but not towards science. As a comparison, STEM researchers directed mostly positive sentiment towards all these topics. The interconnectedness between specific knowledge and the emotions surrounding/targeting it is the main element enabling forma mentis networks (FMNs) to better capture how people perceive events and topics.
---
Emotions in language.
The emotional profile of a portion of language can be considered an extension of its sentiment. Whereas sentiment aims at reconstructing the valence of language, i.e., understanding its pleasantness, emotional profiling contains other dimensions like arousal, i.e., the excitement elicited by a given entity (Posner, et al., 2005), but also projection into the future, desires and beliefs (Plutchik, 2003; Scherer and Ekman, 2014). In cognitive neuroscience, the circumplex model of arousal and valence is one of the simplest yet most powerful models for reconstructing the emotions elicited by words in language through their combined pleasantness and excitement (for more details see Methods and Posner, et al., 2005). The innovation brought by Big Data Analytics approaches to psycholinguistics opened the way also to alternative approaches mapping specific emotional states. The NRC Emotion Lexicon by Mohammad and colleagues identifies which words give rise to eight basic emotional states, like fear or trust among others (Mohammad and Turney, 2013). Relying on the theory of basic emotions from cognitive psychology (Plutchik, 2003; Scherer and Ekman, 2014), these eight states act as building blocks, whose combinations can describe a wide range of emotions like elation, contempt or desperation.
Emotions and behavioral trends in social media. On social media, understanding the emotional perception of different topics can be insightful also for understanding how knowledge with different emotional profiles spreads. Ferrara and Yang (2015) showed how messages with different emotions can be re-shared in different ways on social platforms. The authors identified a positive bias on Twitter, where online users reshared more those messages with a stronger positive sentiment. Other studies identified that not only sentiment but also the semantic content of tweets can boost message diffusion. For instance, Brady and colleagues (2017) found that content eliciting moral ideas was shared more by online users during voting events, linking this phenomenon not only to the sentiment expressed in tweets but also to their emotions. The importance of measuring emotional trends in social media motivated approaches like the Hedonometer, built by Dodds and colleagues (2011) in order to gauge people's happiness through real-world massive events.
Aim and manuscript outline. In the current study, investigating semantic networks, valence, arousal and emotions will wholly be aimed at understanding how online users waited for, perceived and discussed the lockdown release. The Italian Twittersphere is used as a case study. The element of novelty of this manuscript is providing a network of interconnected topics, mapping how individuals discussed a variety of concepts, as expressed in their tweets, when discussing the loose topical hashtag #fase2 about the reopening.
The Methods section outlines the novel methodological tools adopted to the above aim. The Results section investigates the individual research questions outlined above. Results are then combined together and commented in the Discussion section in view of the lockdown release. Current limitations and future research directions opened by this study are also outlined in that section. The Conclusions summarise the contributions of this work and its research questions.
---
Methodology
This part of the manuscript outlines the linguistic datasets and methods adopted and implemented in this work, referencing also previous relevant works and resources.
Twitter dataset. This work relied on a collection of 408,619 tweets in Italian, gathered by the author through Complex Science Consulting's Twitter-authorised account (@ConsultComplex). The tweets were queried through the command ServiceConnect[] as implemented in Mathematica 11.3. Only tweets including the hashtag #fase2 (phase 2) were considered. The flags "Recent" and "Popular" were both used in ServiceConnect in order to obtain either recent tweets produced on the same day of the query or trending tweets, produced on earlier dates but highly re-shared/liked. This combination led to a Twitter dataset including both: (i) large volumes of tweets produced by individuals and (ii) a small fraction of highly reshared/liked tweets. Almost 1.5 percent of the retrieved tweets received more than 100 retweets. These "popular" tweets received on average 286 retweets and 401 likes. Even though these numbers are considerably smaller than those in the Twittersphere of English speakers, they are still remarkable in a population as small as the Italian one (where only 2.85 percent of Internet users have an active Twitter account in 2020, cf., https://gs.statcounter.com/social-media-stats/all/italy, last accessed 1 July 2020).
Tweets were gathered between 1 May and 20 May 2020 in order to evaluate how online users perceive the release of national lockdown before, during and after the actual end of the lockdown on 4 May 2020.
Tweets were ordered chronologically and categorised in each of the 20 considered days. Twitter IDs have been released on a OSF repository and are available for research purposes.
Language processing. Each single tweet was tokenised, i.e., transformed into a series of words. Links and multimedia content were discarded from the analysis, which focused on linguistic content. Emojis and hashtags were translated into words. Emojis were translated by using Emojipedia (https://emojipedia.org/people/, last accessed 1 July 2020), which describes emoticons in terms of simple words, and the resulting words were appended to tweets. Hashtags were translated by using a simple overlap between the content of the hashtag without the # symbol and Italian words (e.g., #pandemia became "pandemia", Italian for pandemic). Words in tweets were then stemmed by using SnowballC as implemented in R 3.4.4, called in Mathematica through the RLink function. Word stemming is important for getting rid of the Italian suffixes describing the plural and gender of a noun (e.g., "ministro" and "ministra" both indicate the concept of a minister) or the tense of a verb (e.g., "andiamo" or "andate" both indicate the concept of going). Previous evidence from psycholinguistics indicates that appending different suffixes to the same stem does not alter the semantic representation attributed to them (Aitchison, 2012), which is rather dependent only on the stem itself (e.g., ministro and ministra both elicit the same conceptual unit relative to minister). This flexibility of language in representing lexical units for denoting concepts has been shown to hold across multiple languages, including Italian (Aitchison, 2012). Stems and syntactic relationships between them were used in order to construct forma mentis networks.
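To make the pipeline concrete, the sketch below re-implements the preprocessing steps described above (link removal, hashtag unpacking, tokenisation and Italian Snowball stemming) in Python with NLTK. The original analysis used Mathematica 11.3 and R's SnowballC, so this is only an illustrative approximation, and the example tweet is invented.

```python
# Illustrative preprocessing sketch (not the original Mathematica/R pipeline).
import re
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("italian")

def preprocess(tweet):
    """Drop links, unpack hashtags, tokenise and stem an Italian tweet."""
    tweet = re.sub(r"https?://\S+", " ", tweet)            # discard links
    tweet = tweet.replace("#", " ")                        # "#pandemia" -> "pandemia"
    tokens = re.findall(r"[a-zàèéìíòóù]+", tweet.lower())  # keep word-like tokens, drop digits
    return [stemmer.stem(token) for token in tokens]       # "ministro"/"ministra" -> same stem

print(preprocess("Domani inizia la #fase2: i ministri annunciano nuove regole"))
```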
---
Forma mentis network.
Textual forma mentis networks were introduced in Stella (2020) as a way of giving network structure to text. Forma mentis networks (FMNs) represent conceptual associations and emotional features of text as a complex network. In a FMN, nodes represent stemmed words. Links are multiplex (Stella, et al., 2017) and can indicate either of the following conceptual associations: (i) syntactic dependencies (e.g., in "Love is blissfulness" the meaning of "love" is linked to the meaning of "blissfulness" by the specifier auxiliary verb "is") or (ii) synonyms (e.g., "blissfulness" and "happiness" overlapping in meaning in certain linguistic contexts). These links were built by using the TextStructure syntactic parser implemented in Mathematica 11.3 and the Italian translation of WordNet (Bond and Foster, 2013). Emotional features are attributed to individual words/nodes. Valence, arousal and emotional eliciting (e.g., does a given word elicit fear?) were attributed according to external cognitive datasets. Notice that the approach adopted here was mostly "bottom-up", as the considered forma mentis network was built through the command TextStructure, which extracted syntactic relationships directly from text. However, FMNs used also semantic associations from WordNet, whose adoption for meaning attribution is considered a "top-down" approach in natural language processing. In this way, the combination of syntactic and semantic associations makes FMNs a hybrid or multiplex approach in capturing meaning from text (Stella, 2020).
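As a minimal illustration of the syntactic layer of such a network, the sketch below links each content word to its syntactic head with a dependency parser and stores the result as an undirected graph. It assumes spaCy's Italian pipeline as a stand-in for Mathematica's TextStructure, works on lemmas rather than stems, and omits the WordNet synonym layer and the emotional labels.

```python
# Minimal sketch of the syntactic layer of a forma mentis-style network.
# Assumes the spaCy Italian model "it_core_news_sm" is installed; this is a
# simplified stand-in for the TextStructure parser used in the original work.
import networkx as nx
import spacy

nlp = spacy.load("it_core_news_sm")

def syntactic_edges(sentence):
    """Yield (head, dependent) lemma pairs between non-punctuation tokens."""
    for token in nlp(sentence):
        if token.dep_ != "ROOT" and not token.is_punct and not token.head.is_punct:
            yield (token.head.lemma_.lower(), token.lemma_.lower())

G = nx.Graph()
for tweet in ["Il governo annuncia nuove misure per la fase due"]:
    G.add_edges_from(syntactic_edges(tweet))

print(sorted(G.edges()))  # head-dependent associations forming the syntactic layer
```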
Cognitive datasets. This study used two different datasets for emotional profiling, namely the Valence-Arousal-Dominance (VAD) dataset by Mohammad (2018), including 20,000 words, and the NRC Emotion lexicon by Mohammad and Turney (2013), including 14,000 words. Both the datasets were obtained through human assessment of individual words, like rating how positively/negatively/neutrally a given concept was perceived or if a given word elicited fear, trust etc. Combinations of valence and arousal can give rise to a 2D space known as the circumplex model of emotions (Posner, et al., 2005), which has been successfully used for reconstructing the emotional states elicited by single words and combinations of them in text. In the circumplex model, emotions are attributed to words by passing through their locations in the 2D space (e.g., high valence/arousal corresponds to excitement). The NRC Emotion Lexicon enables a more direct mapping, indicating the specific words that elicit an emotional state in large audiences of individuals (Mohammad and Turney, 2013). The dataset includes 6 basic emotions (Joy, Sadness, Fear, Disgust, Anger and Surprise) and two additional emotional states (Trust and Anticipation). Whereas the six basic emotions are self-explanatory and identified as building blocks of more nuanced emotions by Ekman's theory in cognitive psychology (Scherer and Ekman, 2014), trust and anticipation include more complex dimensions. Trust can come from a combination of mere affect towards an entity (e.g., trusting a loved one) or rather from logic reasoning (e.g., trusting a politician who behaves rationally), see also (Plutchik, 2003). Anticipation is a projection towards the future that can be either positive or negative, like looking forward to meeting new friends or dreading the day of an exam (Scherer and Ekman, 2014). For this analysis, emotions and emotional states are used interchangeably. Valence/arousal scores and direct emotions were attributed to words in Italian, which were then linked in the forma mentis network according to the language used by social media users.
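For readers unfamiliar with the circumplex model, the snippet below shows one simple way to map a word's valence and arousal scores (rescaled to [0, 1], as in the NRC VAD lexicon) onto a quadrant. The 0.5 midpoint and the quadrant labels are illustrative assumptions, not the exact scheme used in this study.

```python
# Illustrative mapping from valence/arousal scores to circumplex quadrants.
def circumplex_quadrant(valence, arousal):
    """Map valence/arousal in [0, 1] to a quadrant of the circumplex model."""
    if valence >= 0.5 and arousal >= 0.5:
        return "pleasant-activated (e.g., excitement, joy)"
    if valence >= 0.5:
        return "pleasant-deactivated (e.g., calmness, contentment)"
    if arousal >= 0.5:
        return "unpleasant-activated (e.g., fear, anger)"
    return "unpleasant-deactivated (e.g., sadness)"

print(circumplex_quadrant(0.2, 0.8))  # an alarming, negative word
```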
Representing language as a network defines semantic frames and emotional auras. Representing social discourse as a complex network is advantageous. In fact, this representation conveniently enables the adoption of many network metrics to the aim of detecting text features. The simplest example is using conceptual associations to understand which emotions permeated discourse around specific concepts. For instance, Stella and Zaytseva (2020) found that students associated "collaboration" mainly with positive concepts and thus attributed to it a positive aura, i.e., a positive perception, which was confirmed by independent feedback. This study uses a more general measure of conceptual aura combining emotions and semantic frames. In a FMN, the network neighbourhood of a concept C identifies which words were associated to C by online users through syntactic and semantic associations in messages.
According to semantic frame theory (Fillmore, 2006), these associations extracted from language bring contextual information which specifies how C was perceived, described and discussed by individuals. Checking the semantic content elicited by words in the network neighbourhood of C can, therefore, characterise the meaning attributed to C itself in social discourse. Hence, network neighbourhoods in a FMN represent the semantic frames attributed by individuals to concepts in language. Extracting semantic but also emotional information from these frames/neighbourhoods gives insights about people's perceptions and perspectives, i.e., auras, as attributed to concepts.
---
Quantitative measuring of emotional auras.
This study reconstructed the emotional aura or profile of a given concept by counting how many of its associates in the FMN elicited a given emotion, analogously to past approaches (Mohammad and Turney, 2013; Stella, et al., 2020). Words linked to a negation ("non", "nessun" and "senz" in Italian) were substituted with their antonyms as obtained from the Italian WordNet. This operation preserved the flipping in meaning expressed in text when negating words. The computed emotional richness was then compared against a random expectation preserving the empirical number of emotion-eliciting associates of a word while randomising their emotions. A collection of 1,000 random samplings was performed for every empirical richness value reported in the main text, with error bars indicating standard deviations. A z-score indicating emotional richness higher or lower than random expectation at a significance level of 0.05 was also plotted in order to provide a clear visual clue about how individual concepts were perceived in social discourse. These z-scores were organised according to a flower layout and referenced in the text as emotional flowers, with the centre being the rejection region z < 1.96 and the petals representing emotional z-scores. Emotional flowers give an immediate visual impression of which emotions populate a given semantic frame more than random expectation. In fact, all the bars falling outside of the inner semi-transparent circle (i.e., the rejection region) indicate an emotional richness stronger than the random baseline. Notice also that in emotional flowers every ring outside of the semi-transparent circle indicates a z-score unit after 2, i.e., the first ring outside the flower centre is relative to a z-score of 3, etc., thus making it immediate to assess the strongest emotions in a semantic frame and attribute a z-score to them.
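A minimal sketch of this permutation baseline is given below: it counts how many associates of a concept elicit each NRC emotion and compares the counts against 1,000 random re-assignments of the emotion labels. The toy lexicon in the usage example is a placeholder for the full NRC Emotion Lexicon, and the shuffling scheme is a simplified reading of the procedure described above.

```python
# Sketch of the emotional-aura test: empirical emotion counts in a concept's
# neighbourhood versus a shuffled baseline. Placeholder lexicon, illustrative only.
import random
from collections import Counter

EMOTIONS = ["anger", "anticipation", "disgust", "fear", "joy", "sadness", "surprise", "trust"]

def emotional_richness(neighbours, lexicon):
    """Count how many neighbour words elicit each emotion."""
    counts = Counter()
    for word in neighbours:
        for emotion in lexicon.get(word, []):
            counts[emotion] += 1
    return counts

def aura_z_scores(neighbours, lexicon, n_samples=1000):
    """z-scores of the empirical emotional richness against a shuffled baseline."""
    observed = emotional_richness(neighbours, lexicon)
    tagged = [w for w in neighbours if lexicon.get(w)]    # emotion-eliciting associates
    null = {e: [] for e in EMOTIONS}
    for _ in range(n_samples):
        sample = Counter()
        for w in tagged:                                  # same number of tagged associates...
            for _unused in lexicon[w]:
                sample[random.choice(EMOTIONS)] += 1      # ...but randomised emotion labels
        for e in EMOTIONS:
            null[e].append(sample[e])
    z_scores = {}
    for e in EMOTIONS:
        mu = sum(null[e]) / n_samples
        sd = (sum((x - mu) ** 2 for x in null[e]) / n_samples) ** 0.5 or 1.0
        z_scores[e] = (observed[e] - mu) / sd
    return z_scores                                       # |z| > 1.96 marks non-random emotions

toy_lexicon = {"fiducia": ["trust"], "paura": ["fear"], "speranza": ["joy", "anticipation"]}
print(aura_z_scores(["fiducia", "speranza", "paura", "regole"], toy_lexicon))
```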
An example of the FMN as extracted from online discourse around "govern" (government/to govern) is reported in Figure 1. Figure 1 reports the network neighbourhood of "govern", i.e., the frame of semantic/syntactic associations linked with "govern" in tweets. Nodes are stemmed words and links indicate syntactic or semantic relationships. Words are coloured according to the emotion they elicit. In case one word elicits multiple emotions, the colouring is attributed according to the strongest emotion permeating a given semantic frame (like in Figure 1) or the whole social discourse (like in Tables 1A and 2A in the Appendix).
Figure 1: Users' language in tweets reflects their mental perceptions (left), reconstructed here as a forma mentis network outlining the emotions (bottom right) and semantic frame attributed to a concept, e.g., "govern" (top right). Words are emotion-coloured (cf., Methods).
The number of words eliciting different emotions is reported as "emotional richness". Z-scores between empirical and expected emotional richness are reported as an emotional flower (bottom right).
In the emotional flower in Figure 1, the bar of joy reaches the first ring outside of the semi-transparent circle, i.e., joy corresponds to a z-score of 3. Reading the words in the network and considering those emotions stronger than random expectation, i.e., with bars outside of the inner white circle in the emotional flower, makes it possible to assess that, in all tweets between 1 May and 20 May, Italians discussed "govern" with more trust-, anticipation- and joy-eliciting words than expected. Also, jargon of different emotions co-existed together.
Figure 1 illustrates also the cognitive approach adopted by this study. As schematised in Figure 1 (left), each Twitter user produces messages according to their mental lexicon, i.e., a cognitive system storing and processing linguistic knowledge and emotional perceptions about the world. Users communicate their knowledge and perceptions through language in tweets. Hence, Twitter messages contain conceptual associations and emotions. Extracting and aggregating these types of information enables the construction of a knowledge network representing social discourse, i.e., a forma mentis network (Stella, 2020). Notice that words are clustered in network communities of tightly connected concepts, as identified with the Louvain method (Blondel, et al., 2008). Every network visualisation features words translated from Italian to English. The translation process relied on the English-Italian translations already provided by the NRC Emotion Lexicon (cf., Mohammad and Turney, 2013).
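The community detection step can be reproduced, for instance, with the Louvain implementation shipped in recent versions of networkx; the graph below is a placeholder standing in for the word network built from tweets, not the study's actual data.

```python
# Illustrative community detection on a placeholder graph.
import networkx as nx
from networkx.algorithms.community import louvain_communities  # networkx >= 2.8

G = nx.karate_club_graph()  # placeholder standing in for the word network
communities = louvain_communities(G, seed=0)
for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)[:5]} ...")
```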
---
Beyond network neighbourhoods.
FMNs make it possible to study social discourse also in terms of network centrality. In this study, frequency and closeness centrality were compared and used at the same time in order to identify prominent concepts in social discourse. Frequency is based on repeated tweets and indicates how many times single words appeared in the dataset on each day, independently of other words. Closeness depends on the number of syntactic/semantic links separating a word from all other words in the network (Siew, et al., 2019). A lower number of these intermediate connections indicates that a word is more directly syntactically related/associated to other concepts, expressing prominence in the underlying discourse or texts. Stella (2020) showed that, on benchmark texts, high closeness centrality in FMNs was able to identify text topics by highlighting prominent concepts. In cognitive network science, syntactic/semantic distance and closeness have been shown to be highly predictive of word prominence also beyond topic detection, in contexts like early word learning (Stella, et al., 2017).
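The sketch below contrasts the two prominence measures on the same data: raw word frequency over the tweet stream versus closeness centrality computed on the largest connected component of the word network. Function and variable names are illustrative assumptions.

```python
# Illustrative comparison of word frequency and closeness centrality.
from collections import Counter
import networkx as nx

def prominence_rankings(tokens_per_tweet, G, top_n=10):
    """Top words by raw frequency and by closeness centrality on the word network G."""
    frequency = Counter(token for tweet in tokens_per_tweet for token in tweet)
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    closeness = nx.closeness_centrality(giant)
    top_by_frequency = [w for w, _ in frequency.most_common(top_n)]
    top_by_closeness = sorted(closeness, key=closeness.get, reverse=True)[:top_n]
    return top_by_frequency, top_by_closeness
```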
Temporal analysis. Emotional profiling and forma mentis networks are applied in order to reconstruct the main emotions and ideas around the lockdown release as discussed online, on each day between 1 May and 20 May, in a fashion similar to the Hedonometer by Dodds and colleagues (2011). The stream of tweets is processed chronologically. When emotions are profiled, single tweets are considered. This means considering temporal trajectories of 400k points, one for each emotional state (e.g., fear, trust, anticipation, etc.), one for the total valence scores and one for the total arousal scores of words in a tweet. These noisy trajectories were averaged over time. An exponentially weighted moving average was used in order to smooth noisy outliers over a short time window. The smoothing factor was chosen as an average over 10,000 different attempts at minimising the mean squared error of the 1-step-ahead forecasts, each using 10,000 tweets and starting from a random time between 00:00 1 May 2020 and 23:59 20 May 2020. An average of 0.00075 was identified for the smoothing factor of the emotional time series, indicating the ability of the smoothed signal to detect shifts in emotions determined by an average of 1/0.00075 ≈ 1,333 tweets. For valence and arousal, an average smoothing factor of 0.0006 was detected, corresponding to shifts involving about 1/0.0006 ≈ 1,667 tweets. This error minimisation technique was simple enough to preserve long-term changes and trends in the time series while also smoothing out short-term fluctuations.
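The smoothing step can be sketched as follows: a pandas exponentially weighted moving average whose smoothing factor alpha is selected by minimising the mean squared error of one-step-ahead forecasts. The candidate grid of alphas and the toy signal are illustrative, not the exact search procedure used here.

```python
# Illustrative EWMA smoothing with a data-driven choice of the smoothing factor.
import numpy as np
import pandas as pd

def choose_alpha(signal, candidates=np.logspace(-4, -2, 30)):
    """Pick the EWMA smoothing factor minimising one-step-ahead forecast error."""
    best_alpha, best_mse = None, np.inf
    for alpha in candidates:
        forecast = signal.ewm(alpha=alpha, adjust=False).mean().shift(1)
        mse = ((signal - forecast) ** 2).mean()
        if mse < best_mse:
            best_alpha, best_mse = alpha, mse
    return best_alpha

# Toy chronological signal standing in for per-tweet fear richness.
fear_per_tweet = pd.Series(np.random.rand(10_000))
alpha = choose_alpha(fear_per_tweet)
smoothed = fear_per_tweet.ewm(alpha=alpha, adjust=False).mean()
print(alpha, 1 / alpha)  # roughly the number of tweets over which shifts become detectable
```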
Emotional fluctuations. Emotional deviations were operationalised by deviations from the interquartile range of all detected signals in a given time window. Notice that the filtered signals and the observed deviations from interquartile range were not used in order to make forecasts or attribute statistical significance but only in order to qualitatively highlight potential shifts in social discourse. These potential deviations were then cross-validated by a frequency analysis of words/retweeting counts of tweets/forma mentis emotional auras in the considered time windows.
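A literal reading of this operationalisation is sketched below: flag the points of a smoothed emotion signal that fall outside the interquartile range observed in a given time window. As in the text, the flags are descriptive only and carry no statistical significance.

```python
# Illustrative flagging of interquartile-range deviations in a smoothed signal.
import numpy as np

def iqr_deviations(values):
    """True where a smoothed signal leaves the interquartile range [Q1, Q3] of its window."""
    values = np.asarray(values)
    q1, q3 = np.percentile(values, [25, 75])
    return (values < q1) | (values > q3)
```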
---
Results
---
RQ1: Which were the main general emotions flowing in social media about the reopening?
Figure 2 reports the emotional profile of social discourse over time. Remember that the emotional profile corresponds to how rich the overall social discourse was in each emotion (emotional richness, see Methods). Importantly, non-zero signals of all emotions were found across all time windows. This means that social discourse about the reopening was never dominated by a single positive or negative emotion, like trust or fear. The reopening was rather perceived as a nuanced topic of discourse, where positive and negative emotional texts co-existed together, in agreement with other studies (Lima, et al., 2020; Gozzi, et al., 2020; Stella, et al., 2020). Figure 2 indicates that sentiment mostly remains stationary over time whereas emotional richness shows more complicated dynamics, with peaks and deviations. This is the core of RQ2. Figure 2 focuses on individual, non-cyclic deviations from stationary behaviour, like peaks or deviations featured on individual days.
Emotional fluctuations unveil social denouncement, trust and joy. Several deviations from median emotional intensity are found in different time windows and for different emotions. Before the official reopening of 4 May, social discourse registered several fluctuations in terms of fear, anger, surprise and disgust. The morning of 2 May registered a progressive increase in anger co-occurring with a spike of fear. According to Plutchik's (2003) theory of emotions, the alertness against a threat caused by fear can give rise to anger as a reaction mechanism, so that the two emotions are not independent of each other. A closer investigation of the stream of tweets reveals the proliferation of highly retweeted tweets, in the morning of 2 May, mainly about: (i) political denouncement of how the Italian government could use EU investments for the reopening, expressing alarm about "vultures" preying on the misfortune of others, and (ii) denouncement of the gender gap, criticising why only 20 percent of policy-makers enrolled by the Italian government were women.
The afternoon of 2 May also registered a decrease in surprise and a spike of disgust. The most frequent words/most retweeted messages in that time window indicated the continuation of negative/criticising political debate together with messages protesting against the security measures for public businesses like restaurants, hairdressers and beauty centres. These negative trends did not impact the average joy measured on the day, which remained fairly constant over time and was expressed in several tweets conveying hope and excitement about the incoming reopening.
The observed decrease in surprise taking place on 2 May corresponded to the resharing of mass media articles, starting early in the morning and explaining the new measures concretely enabling the reopening, with jargon like "regol" (rules), "chiest" (ask), "intervist" (interview) and "espert" (expert). These articles explaining future events about the reopening also contributed to increasing anticipation, i.e., an emotional projection into the future (Plutchik, 2003).
A delayed positive contagion. On 4 May, the day of the lockdown release all over Italy, emotional trends remained fairly constant over time. A drastic drop in negative emotions, co-occurring with a rise in positive ones, was found on 5 May, starting around 10 AM. A closer look at the stream of Twitter messages reveals that this massive change in global emotions was due to tweets of news reporting how the contagion had slowed down in the three previous days.
Messages expressing excitement about the reopening ("let's enjoy phase 2!") were the most retweeted ones on 5 May. Interestingly, positive messages included also: (i) desire for travelling, (ii) appreciation for the newfound freedom, and (iii) trustful instructions about how to use self-health sanitary tools, like facemasks, for living together with COVID-19. Hence, the emotional effects of the lockdown release were not observed on the same day of reopening, 4 May, but were rather delayed by one day and enhanced the overall flow of positive emotions on 5 May. Such delayed and drastic alteration in emotional profile provides evidence for a collective emotional contagion, indicating how the reopening was collectively perceived with mostly positive emotions by online users.
---
Peaks of sadness and social distancing.
Emotional trends remained mostly constant in the aftermath of the reopening, with strong fluctuations present on 11 May and 12 May. The sudden spike in sadness and disgust registered in the afternoon of 11 May and early morning of 12 May is related to tweets of complaint. The most retweeted messages in this time window expressed concern and complaint about a lack of clear regulations about social behaviour, exposing critical issues like large crowds assembling in public spaces and a difficulty for restaurants to guarantee social distancing. The most frequent jargon in this time window was "misur" (measure), "distanz" (distance) and "tavolin" (table). At the same time, the Twitter stream also featured news of local COVID-19 outbreaks. A smaller peak in anger and disgust was featured on 20 May and mostly related to Twitter messages of political denouncement.
In order to better understand the above emotional shifts, in the next section the same Twitter stream is analysed with the valence-arousal circumplex model (see Methods). Results are compared against the above ones obtained with the NRC Emotion Lexicon.
---
RQ2: Were there emotional shifts over time highlighted by some emotion models but neglected by others?
The above emotional fluctuations indicate changes in the global perception of social discourse that were confirmed by a closer look at the Twitter stream, indicating the powerfulness of the NRC lexicon to identify emotional transitions over time. Figure 2 (bottom row) reports the richness in valence and arousal of words as embedded in tweets. Despite the plot range being the same as in Figure 2 (top row), both valence and arousal remained mostly constant over time, hiding the emotional peaks and fluctuations observed with the NRC Emotion Lexicon. Notice that no fluctuations were observed even by manually tuning the smoothing factor of the valence/arousal curve. The only stronger deviation observed with the valence-arousal circumplex model is on 20 May, where the drop in valence and the increase in arousal are compatible with negative/alarming emotions like anger, in agreement with what was found with the NRC Emotion Lexicon.
Reopening was a positive event but no "happy ending". The above results also indicate that the reopening after the lockdown was met with a positive emotional contagion over social media. The deluge of trust and joy, in combination with anticipation, indicates a positive and hopeful perception of restarting after a lockdown. The restart itself was not a happy ending, though. Negative emotions indicated a deluge of complaints and social denouncement about gender disparities, risks of inappropriate behaviour, difficulties in keeping up with social distancing and political controversies.
In order to better understand how social discourse was structured across days, conceptual prominence over time is investigated in the next section.
---
RQ3: Which were the most prominent topics of social discourse around the reopening?
Prominent words combine fears and hopes about restarting. Tables 1A and 2A (cf., Appendix) report the most frequent concepts and the words with the highest closeness centrality in FMNs, respectively, as extracted from daily social discourse around #fase2. Words are coloured according to the emotion they elicit (see Methods). The negator "not" was consistently ranked first in all cases and was not reported for the sake of visualisation. Notice how on 3 May the most frequent word in social discourse was "doman" (tomorrow), indicating the anticipation expressed by online users towards the reopening on 4 May. The concept of "govern" (government, to govern) was highly ranked by both frequency and closeness centrality across all days. This indicates that a substantial fraction of tweets was linked to the governmental indications and measures for the reopening, as identified also in the emotional profiling. Jargon related to the COVID-19 pandemic like "cas" (case), "contag" (contagion) and "quaranten" (quarantine) remained highly central across the whole period, indicating that social discourse about the reopening was strongly interconnected with news about the contagion, as also indicated by Gozzi and colleagues (2020). Concepts like "nuov" (new) and "mort" (death) ranked highly in both frequency and closeness on some days because of the reported news about local COVID-19 outbreaks. Inspirational jargon like "respons" (responsible), "affront" (to face), "entusiasm" (enthusiasm) and "sper" (hopeful) was prominent across the whole time period and with both measures. This quantitative evidence indicates that social discourse was strongly focused on a concretely positive attitude towards a responsible reopening.
Frequency captures more negative jargon. As indicated by emotional profiling, these prominent and positive concepts coexisted with prominent but negative concepts, like "vergogn" (ashamed), "critica" (criticism) and swearing. These concepts were captured mostly by frequency rather than by closeness centrality, indicating the proliferation of negative messages repeating these concepts with less contextual richness when compared with positive concepts (which end up being more central in FMNs). An example of this trend is on 14 May, where frequency captures mostly blaming concepts whereas closeness identifies more general topics like "govern", "far" (to do) and "misur" (measures). This difference calls for a more systematic comparison of frequency and closeness in identifying word prominence.
---
Closeness captures contextual diversity.
Frequency and closeness correlated positively across the whole period, with a mean Kendall tau of 0.67 ± 0.04 (p < 10^-6) averaged over all 20 days. This value indicates that words ranked highly by closeness centrality tended to be ranked highly also by frequency. As an example, a scatter plot of the log frequency and closeness centrality of individual words is reported in Figure 1A in the Appendix. The correlation between the two quantities is not perfect (i.e., not equal to 1). On the one hand, closeness better captures contextual richness (Stella, et al., 2017), i.e., the number of different semantic contexts and frames featuring a concept, an example being meaning modifiers commonly occurring in different contexts, like "non", that tend to have high closeness. On the other hand, high frequency but lower closeness identifies words with very narrow semantic frames, appearing always within the same context and bearing the same meaning, e.g., "shock" and "disordine" (disorder). Combining closeness and frequency can therefore highlight more nuances of the meanings attributed to words through a complex network approach. While frequency shows that concepts like "govern" (government/to govern), "cas" (case) and "quaranten" (quarantine) remained highly ranked between 1 May and 20 May, closeness identified different dynamics for "quaranten". In the first half of May, "quaranten" became monotonically less central in the forma mentis networks of daily social discourse, registering a decrease of almost 200 positions in its rank. This difference indicates that quarantine kept being a frequent concept in social discourse but appeared in fewer and fewer contexts, gradually becoming more peripheral in the discussion. This decrease halted and reversed on 15 May, after which "quaranten" acquired a higher closeness. Investigating the Twitter stream reveals that the increase in rank of quarantine registered after 15 May is due to many tweets reporting the decision of the Italian government to accept tourists with no obligation of self-quarantine.
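The rank comparison can be reproduced with scipy as sketched below, computing Kendall's tau between frequency and closeness over the words shared by the two rankings on a given day. The inputs are assumed to be plain dictionaries mapping words to scores; this is an illustration, not the original analysis code.

```python
# Illustrative rank agreement between frequency and closeness centrality.
from scipy.stats import kendalltau

def rank_agreement(frequency, closeness):
    """Kendall's tau between frequency and closeness over words present in both rankings."""
    shared = sorted(set(frequency) & set(closeness))
    tau, p_value = kendalltau([frequency[w] for w in shared],
                              [closeness[w] for w in shared])
    return tau, p_value
```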
---
Closeness highlights dynamics invisible to frequency.
Figure 3: Closeness and frequency ranks over time of "govern" (magenta), "case" (pink) and "quarantine" (green) and other prominent concepts on 4 May.
---
RQ4: How did online users express their emotions about specific topics in social discourse?
The previous sections characterised social discourse across days. Instead, this section explores how online users described specific concepts on a single day.
---
What preoccupied online users on the vigil of reopening?
The emotional profiles and semantic frame/FMN neighbourhood around "worried" ("preoccup") as extracted from the stream are reported in Figure 4. When talking about their preoccupations about #fase2, Italians displayed different emotional profiles between 3 May and 4 May. The day before the reopening, trust, anger and fear coexisted together (see also Figure 2A in the Appendix). The semantic content of the FMN contains information about the main concepts eliciting these emotions. Negative emotions mainly targeted/concentrated around the difficulties of reopening ("difficulties", "complain", "fear"), which were projected to/linked with "tomorrow". On the day of the reopening, 4 May, the anger of the previous day vanished and more hopeful words appeared (e.g., "success" and "hope"). On 4 May, preoccupation was linked to the institutions, featuring fear and sadness for their "absence". The links involving "not" contrasted the negative meaning of "worried" with positive, rather than negative, associations like "opportunity", "alive" and "respect". The links between "worried", "commerce" and "plight" also indicate that even on the day of the reopening, social media expressed concern about the economic repercussions of the lockdown for commerce. Jargon related to the contagion ("coma", "case", "contagious") indicates a concern about the COVID-19 contagion present even on the day of the reopening.

Ending the quarantine was not a "happy ending". As reported above, the concept of "quaranten" (quarantine) became less and less prominent in social discourse in terms of closeness centrality, i.e., it became peripheral in the flow of social discourse by being presented in fewer and fewer different contexts. Did its emotional aura also undergo some transformation? Figure 5 compares the semantic/emotional frames of "quaranten" on 1 May (top) and 6 May (bottom). Before the reopening, social discourse around quarantine elicited trustful associations of anticipation towards the future, involving the government and celebrating the success of the quarantine in slowing down the contagion ("success", "volunteer", "gorgeous"). Traces of social denouncement were present too, with links towards anger-related jargon like "ashamed", "damage" and "rebel". However, the registered emotional richness of anger around "quaranten" on 1 May was compatible with random expectation (see also Figure 3A in the Appendix). This positive perception of the quarantine did not last. Two days after the reopening, the threat of new cases of contagion was prominently featured in social discourse around the quarantine, as captured by the triad with "newcomer" and "contagion" and also by other negative associates like "isolated", "death", "coffin" and "long"-"forgotten". In a few days, positive emotions around the quarantine dissipated and were replaced by sadness. A closer check of the stream of tweets reveals that this flicker of sadness originated in news media announcements reporting local outbreaks of COVID-19. Reopening the country with COVID-19 still circulating among the population disrupted the positive "happy ending" perception of the (end of the) quarantine.

An unwavering yet nuanced trust in politics. Different emotions can coexist not only in the global social discourse but also around specific concepts. An example is "politics" ("polit"), which consistently featured a trust in its semantic/emotional frame higher than random expectation between 1 May and 20 May (z-scores higher than 1.96).
As reported in Figure 6, trust in politics on 2 May was mainly focused around the government ("govern"), its crew of experts ("expert") and its strategies for containing the contagion and countering the economic repercussions of the lockdown (see links with "launch", "plan" and "economy"). Although persisting over time, trust co-existed also with other emotions surrounding "politics" (see also Figure 4A in the Appendix). For instance, on 11 May, "politics" featured several associations with anger-eliciting words, like "dictatorship", "garbage", "controversial" and "ashamed", all concepts expressing political denouncement against controversial political measures of the lockdown.
The FMN on 11 May also reports the source of this anger: as reported in the tweets registered during that time window, politics was considered "responsible" for, and expected to find, the "money" needed to prevent small businesses from going "bankrupt". This burst of anger (with z-score > 1.96, cf., the emotional flower in Figure 6, bottom) is another example of a flickering emotion. In this case, notice that anger and sadness co-existed with trust, indicating a persistent perception of trust in politics in online users' discussions. It should be underlined that, as reported in the emotional flower in Figure 1, "govern" (to govern/government) also featured a trustful emotional aura.
---
RQ5: Were messages expressing different emotions reshared in different ways?
Previous studies already established that valence can influence the extent to which tweets are re-shared by online users (Ferrara and Yang, 2015;Brady, et al., 2017). In particular, Ferrara and Yang (2015) found a positive bias on Twitter, i.e., a tendency for users to share messages with a positive sentiment/valence.
This section aims at testing whether differences in tweet sharing hold also beyond valence and across the whole spectrum of different emotions.
Considering the emotions of moderately and highly retweeted messages. Attention was given to the most retweeted messages and their emotional content. Distributing tweets according to their retweet count, focus was given to the tweets above the 98.5th percentile, which included 5,942 tweets with a median of 205 retweets, a minimum of 100 re-shares and a maximum of 2,822 retweets. Tweets above the median of 205 retweets were considered as highly retweeted (HR). Tweets below the median of 205 re-shares were considered as moderately retweeted (MR). Using the NRC lexicon, the emotional profile of each single HR and MR tweet was computed. For every emotion, the two distributions of emotional richness resulting from HR and MR tweets were compared.
With a statistical significance of 0.05, highly retweeted messages about the Italian reopening exhibited:
1. A lower emotional richness in anger than moderately retweeted messages (mean HR: 0.0452, mean MR: 0.0488, Mann-Whitney stat. 2.09 × 10^6, p = 0.0124);
2. A higher emotional richness in fear than MR messages (mean HR: 0.0874, mean MR: 0.0925, Mann-Whitney stat. 1.92 × 10^6, p = 0.0225);
3. A higher emotional richness in joy than moderately retweeted messages (mean HR: 0.0874, mean MR: 0.0925, Mann-Whitney stat. 1.91 × 10^6, p = 0.0068).
For all the other emotions, namely disgust, sadness, anticipation, surprise and trust, no statistically significant difference was found between highly and moderately retweeted messages.
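The comparison can be sketched as follows: split the most retweeted tweets at the median retweet count into HR and MR groups and compare their per-tweet emotional richness with a Mann-Whitney U test. The dataframe and column names below are placeholders, not the original data.

```python
# Illustrative HR vs MR comparison with Mann-Whitney U tests (placeholder data).
import pandas as pd
from scipy.stats import mannwhitneyu

def compare_hr_mr(tweets, emotions=("anger", "fear", "joy")):
    """Mann-Whitney U tests of emotional richness for highly vs moderately retweeted tweets."""
    top = tweets[tweets["retweets"] >= tweets["retweets"].quantile(0.985)]
    median_rt = top["retweets"].median()
    hr = top[top["retweets"] > median_rt]    # highly retweeted
    mr = top[top["retweets"] <= median_rt]   # moderately retweeted
    results = {}
    for emotion in emotions:
        stat, p_value = mannwhitneyu(hr[emotion], mr[emotion], alternative="two-sided")
        results[emotion] = {"mean HR": hr[emotion].mean(), "mean MR": mr[emotion].mean(),
                            "U": stat, "p": p_value}
    return results
```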
Fear subverted the positive bias of resharing. In the current social discourse, tweets shared significantly more by online users elicited more joy, a higher fear and less anger. These results provide evidence confirming and extending the previous positive bias identified only over sentiment by Ferrara and Yang (2015). According to the circumplex model (Posner, et al., 2005), joy is an emotion depending on positive sentiment whereas fear and anger live in the space of negative sentiment. Finding that people tended to reshare more tweets with higher joy and lower anger represents additional confirmation of the positive bias that people tend to re-share content richer in positive sentiment. However, this tendency does not hold across the whole spectrum of emotions. Fear subverted the positive bias: online users tended to re-share messages richer in fear, and thus in negative sentiment.
---
Discussion
The main take-home message of this investigation is that the post-COVID-19 reopening in Italy was not a "happy ending", since social discourse highlighted a variety of semantic frames, centered around several issues of the restart and mixing both positive and negative emotions.
This rich semantic/emotional landscape emerges as the main novelty of this approach, which transparently links together emotions (rather than more simplistic sentiment patterns) with the specific semantic frames evoking them, as extracted from the language of social discourse. This extraction relies on a fundamental assumption: text production, and therefore social media, both open a window into people's minds (Aitchison, 2012; Ferrara and Yang, 2015). Under a time of crisis, like during a pandemic, being capable of seeing through such a window is fundamental for understanding how large audiences are coping with the emergency (Bonaccorsi, et al., 2020; Gallotti, et al., 2020; Gozzi, et al., 2020). This challenge requires tools that provide a transparent representation of knowledge and emotions as expressed in social discourse. This work used computational cognitive science for seeing through the window of people's minds with the semantic/emotional analysis of tweets (Stella, et al., 2020), without explicitly relying on machine learning. The analysis performed here on social discourse in 400k Italian tweets, including #fase2 (phase 2) and produced between 1 May and 20 May 2020, provides several important points for discussion.
The reopening was not a happy ending. As outlined within RQs 1, 3 and 4, emotional profiling provided evidence for a positive emotional contagion happening online after the day of the restart, with levels of trust, joy, happiness and anticipation all simultaneously higher than previously registered. This positive emotional contagion did not last, and it did not feature the complete disappearance of negative emotions, like fear or anger, which rather co-existed with the others in social discourse. The coexistence of different types of emotional trends was also found in previous works about COVID-19 (Stella, et al., 2020) and is not surprising, given the unprecedented range of socio-economic repercussions that the pandemic brought not only to the health system but also to social mobility and the economy (cf., Bonaccorsi, et al., 2020; Pepe, et al., 2020). What is more interesting is that such a constellation of different positive, negative and neutral emotions cannot be attributed only to the concept of "reopening" but rather has to be distributed or scattered across circulating news and key topics of social discourse. This scattering creates a methodological challenge for understanding the targets and actors of these emotions.
News flows and politicians were found to be relevant in driving emotions like disgust (see RQ1), which were invisible to sentiment analysis (see RQ2). Enhancing standard frequency-based lexical analysis with closeness (RQ3) highlighted a plurality of key concepts, brought by news and users' messages, being discussed in different ways across days. The semantic frames reconstructing how online users perceived such prominent concepts revealed a set of flickering emotions (RQ4), which were assessed in detail thanks to forma mentis networks. Notice that this approach gave focus to the cognitive structure of the language used by online users and not to their identity. The flickering emotions/conceptual prominence reported here might be the effect of a "topic drift" promoted by a handful of influential users, who brought attention to specific aspects of the reopening by launching additional hashtags or by simply targeting specific users while creating flaming content or trolling. The latter scenario has been frequently unearthed in previous studies focusing on the Twittersphere (Zelenkauskaite and Niezgoda, 2017; Bessi and Ferrara, 2016; Stella, et al., 2018; Ferrara, 2020), which showed how trolling and social bots might be capable of depicting the political climate in ways rich in negative sentiment and anger-related emotions. The specific identification of the exact actors enabling emotional contagion and topic drift represents a very interesting research direction for future work.
The limits of valence/arousal in social discourse analysis. On the methodological side, the results in RQs 1 and 2 indicate that the NRC Emotion Lexicon (Mohammad and Turney, 2013) is considerably more powerful than the circumplex model in detecting spikes and shifts in social discourse. This difference can be explained with the observation that social discourse is different from a single text or a book. In social media, multiple individuals can participate in a conversation, often reporting different angles, perspectives or stances about the same topic. Hence, whereas in a book a single author usually reports a stance with a predominant emotional tone (Berman, et al., 2002), in social discourse multiple tones can co-exist (Kalimeri, et al., 2019) and they could average out when considering valence/arousal. For instance, anger in the circumplex model corresponds to high arousal (excitement) and negative valence (negativity), whereas trust corresponds to low arousal (calmness) and positive valence (positivity).
When anger and trust co-exist, as found in the current dataset with the NRC lexicon, the opposing contributions of angry and trusting messages average out in a valence/arousal representation.
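To make this averaging-out point concrete, the toy calculation below uses invented valence/arousal coordinates (not values from the NRC or VAD lexica): the mean valence and arousal of one angry and one trusting message look nearly neutral, whereas emotion-specific counts keep both signals visible.

```python
# Toy illustration with assumed coordinates: opposing emotions cancel out on
# average, while emotion-specific counts preserve both.
angry    = {"valence": -0.7, "arousal": +0.8}   # negative valence, high arousal
trusting = {"valence": +0.7, "arousal": -0.6}   # positive valence, low arousal

mean_valence = (angry["valence"] + trusting["valence"]) / 2   # 0.0 -> looks neutral
mean_arousal = (angry["arousal"] + trusting["arousal"]) / 2   # 0.1 -> looks calm

emotion_counts = {"anger": 1, "trust": 1}   # an emotion lexicon keeps both signals
print(mean_valence, mean_arousal, emotion_counts)
```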
Hence, the current results provide strong evidence for the necessity of adopting emotion-specific tools for the analysis of social discourse beyond valence/sentiment. While extremely useful for single-author texts, the valence-arousal circumplex model of emotions might not be suitable for the investigation of highly nuanced emotional profiles in social discourse, where multiple positive or negative emotions might co-exist. Exploring the eight basic emotional dimensions of Twitter discourse, in terms of fear, anger, disgust, anticipation, joy, surprise, trust and sadness (Mohammad and Turney, 2013), highlighted spikes in social and political denunciation of gender and economic inequality and bursts of news media announcements about the COVID-19 pandemic. These phenomena went unnoticed when considering the valence and arousal of social discourse (Posner, et al., 2005), underlining the necessity to move from general sentiment/arousal intensity approaches to more comprehensive emotional profiling investigations of social discourse.
Cognitive networks and stance detection. This whole study revolves around giving structure to social discourse through complex networks. This procedure enabled a quantitative understanding of people's perceptions and stances toward various aspects of the nationwide reopening. To this aim, textual forma mentis networks were used, reconstructing syntactic, semantic and emotional associations between concepts as embedded in text by individuals (Stella, et al., 2019; Stella and Zaytseva, 2020; Stella, 2020). As explored within RQ3, closeness centrality in the networks built from each day's social discourse consistently identified as central both positive concepts, related to the government, the willingness to restart and the necessity of establishing measures for rebooting the economy and social spaces, and negative words, related to attention to the contagion and new cases. Word frequency captured analogous prominent concepts but also tended to highlight more negative words, expressing political and social denunciation. Closeness, based on conceptual distance, and frequency, based on word counts, did not perfectly correlate with each other and even offered different information about how conceptual prominence evolved over time. An example is "quarantine", which was progressively used in fewer and fewer different contexts, mainly related to local COVID-19 outbreaks and the decreasing epidemic curve, while remaining consistently highly frequent in discourse over time.
Indeed, frequency neglects the structure of the language used for communicating ideas and emotions, so it is expected that frequency provides different results than closeness. Consider the simple example of a collection of 100 tweets, 80 of them being the repetition of "I hate coronavirus" and the remaining 20 linking "coronavirus" with medical jargon in different ways (e.g., "One of the symptoms of the novel coronavirus affliction is cough", "The novel coronavirus is a pathogen originated in animals and transmitted to man", etc.). Considering only frequency would identify social discourse as dominated by "hate" and "coronavirus" but would miss the constellation of less frequent words that give meaning to and characterise "coronavirus" through medical links and contextual associations, which are instead captured by closeness. The empirical and methodological aspects outlined above underline the necessity of considering not only frequency but also other structural measures of language, like network closeness, in order to better assess opinion dynamics and online public perceptions over social media.
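The frequency-versus-closeness contrast in the example above can be reproduced in a few lines; the sketch below (a toy illustration, not the forma mentis pipeline used in the study) builds a word co-occurrence network from the hypothetical corpus and compares word counts with closeness centrality.

```python
# Toy corpus echoing the example above: "hate" and "coronavirus" dominate by
# frequency, but "coronavirus" stands out by closeness thanks to its many
# distinct medical/contextual links.
from collections import Counter
import itertools
import networkx as nx

tweets = ["hate coronavirus"] * 80 + [
    "coronavirus symptom cough",
    "coronavirus pathogen animal",
    "coronavirus transmission man",
]

freq = Counter(word for t in tweets for word in t.split())

G = nx.Graph()
for t in tweets:
    G.add_edges_from(itertools.combinations(t.split(), 2))   # co-occurrence edges

closeness = nx.closeness_centrality(G)
print(freq.most_common(3))
print(sorted(closeness.items(), key=lambda kv: -kv[1])[:3])
```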
Flickering emotions surrounded specific facets of the reopening. As reported within RQ4, forma mentis networks also highlighted how people's perceptions of specific aspects of the reopening changed over time. On the day of lockdown release, the main preoccupations of Italians focused on the economy but were strongly contrasted by hopeful messages, semantically framing the reopening as a fresh new start for getting back to normality. Hope did not embrace all aspects of the reopening. Announcements of local COVID-19 outbreaks altered the perception of "quarantena" (quarantine), which was previously perceived as successful in reducing contagion. Trust vanished and was replaced with a sad perception of quarantine, related to sudden, local outbreaks (see also Gozzi, et al., 2020). Italians also displayed an unwavering trust in politics and the government across the whole considered time window. Notice that a persistent trust in politics and governments can be beneficial in guiding a whole nation towards successfully reopening after a nationwide lockdown (Massaro, 2020; Lima, et al., 2020). However, trust co-existed with negative emotions on some days, indicating a nuanced perception portrayed in social discourse, combining trust in the institutions with political denunciation, anger and sadness about delays or lack of clarity. The microscopic patterns observed in RQ4 indicate that the general analysis of social discourse in terms of emotional profiling is not enough for understanding the complex landscape of public perception in social media. A complex network approach, structuring concepts and emotions around specific events, represents a promising direction for future research on social media perception and dynamics (see also Arruda, et al., 2019; Brito, et al., 2020; Stella, 2020).
Resharing behaviours and fear. As reported in RQ5, this study also investigated user behaviour in sharing content with certain emotional profiles. Tweets richer in joyful concepts were found to be more frequently re-shared by users, while the opposite was registered for anger. Retweeting more joyful and less angry tweets is compatible with the positive bias for users to retweet tweets with positive sentiment found by Ferrara and Yang (2015). However, this bias was subverted by fear, as in the current analysis online users were found to re-share messages richer in negative, fearful jargon more often. This tendency might be due to the strong affinity of the considered tweets with the COVID-19 contagion, a phenomenon met with fear and panic over social media (Stella, et al., 2020; Lima, et al., 2020). The observed pattern might therefore be a symptom of panic-induced information spreading (Scherer and Ekman, 2014). These distinct behavioural patterns mark a sharp contrast in the way that different emotions work over social media. Future research should investigate not only the number of retweets but also the depth of content spreading in order to better understand how different emotions pervaded the Twittersphere.
---
Limitations of this study
The current analysis presents four main limitations, which are discussed below in view of potential future research.
---
Accounting for cross-linguistic variations in emotions.
This study investigated tweets in Italian by considering cognitive datasets, like the NRC Emotion Lexicon and the VAD Lexicon, which were not built specifically from native Italian speakers. In fact, these datasets were obtained in mega-studies with English speakers and then translated across different languages (cf., Mohammad and Turney, 2013; Mohammad, 2018). From a psycholinguistic perspective, translation might not account for cross-linguistic differences in the ways specific concepts are perceived and rated (Aitchison, 2012). In the absence of large-scale resources mapping words to emotions directly from Italian native speakers, the above translations represent a valuable alternative, successfully adopted also in other studies (Stella, et al., 2020). With the advent of Mechanical Turk and other platforms for running psycholinguistic mega-studies, future research should be devoted to obtaining emotional lexica specifically tailored to Italian and to languages other than English.
Focusing on user replies. The considered dataset mapped only tweets incorporating the #fase2 (phase 2) hashtag and did not consider user replies without that hashtag. This limitation means that the social discourse investigated here was mostly generated by individual users and was not the outcome of user replies. As a consequence, by construction, the considered dataset is more focused on reporting the plurality of individual perceptions about the reopening, without considering trolling or debates spawning from post flaming (Stella, et al., 2020) or from malicious social bots (Ferrara, 2020). Notice also that the dataset included retweets and user mentions, which contributed to discussions between users. Future studies might focus more on the conceptual/emotional profiling of users' storylines and discussions.
Combining networks and natural language processing. From a language processing perspective, this study focused on extracting the network structure of syntactic and semantic relationships; it included word negation and also accounted for meaning modifiers. However, the current analysis did not amplify or reduce emotional richness according to other features of language like punctuation or adverbs (e.g., distinguishing between "molto gioioso"/very happy and "gioioso"/happy), as was done in other studies (Stella, et al., 2018). Despite this lack of fine structure, the emotional profiles built and analysed here were still capable of highlighting events in the stream of tweets, like the proliferation of messages about social/political denunciation or strong fluctuations in the perception of specific aspects of the reopening as promoted by news media, e.g., local outbreaks of COVID-19 cases or quarantine-less tourism. More advanced methodologies combining the network approach and natural language processing (cf., Vankrunkelsven, et al., 2018) would constitute an exciting development for a more nuanced understanding of emotions in social discourse.
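Purely as an illustration of the kind of modifier handling that was left out (not what this study implemented), a scorer could scale or flip word-level emotion values according to neighbouring intensifiers and negations; the lexicon entries and scaling factors below are invented.

```python
# Hypothetical handling of Italian intensifiers/negations around an emotion word.
lexicon = {"gioioso": {"joy": 0.8}}      # "gioioso" = joyful/happy (assumed score)
intensifiers = {"molto": 1.5}            # "molto" = very (assumed boost)
negations = {"non": -1.0}                # "non" = not (assumed flip)

def score_joy(tokens):
    total = 0.0
    for i, tok in enumerate(tokens):
        if tok in lexicon:
            value = lexicon[tok]["joy"]
            if i > 0 and tokens[i - 1] in intensifiers:
                value *= intensifiers[tokens[i - 1]]
            if i > 0 and tokens[i - 1] in negations:
                value *= negations[tokens[i - 1]]
            total += value
    return total

print(score_joy("molto gioioso".split()))   # 1.2 (amplified)
print(score_joy("gioioso".split()))         # 0.8 (baseline)
```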
Profiling the emotions of different discourse dimensions. Another limitation of the study is that it does not explicitly relate emotions and concepts to specific aspects of social discourse like knowledge transmission, conflict expression or support. The coexistence of hopeful, angry and fearful patterns highlighted in this study indicates an overlap of these different dimensions of conversation in social discourse. A promising approach for uncovering these dimensions and identifying the emotions at work behind conflict, knowledge sharing and support for the reopening would be the application of recent approaches to text analysis relying on deep learning (cf., Choi, et al., 2020).
---
Conclusions
This study reconstructed a richly nuanced perception of the reopening after the national lockdown in Italy. The Italian Twittersphere was dominated by positive emotions like joy and trust on the day after the lockdown release (RQ1), in an emotional contagion dominated by hopeful concepts about restarting. It was not a happy ending (RQ3). Emotions like anger, fear and sadness persisted and targeted different aspects of the reopening, like sudden rises in the contagion curve, economic repercussions and political denunciation, even fluctuating from day to day (RQ4). Users' behaviour in content sharing was found to promote the diffusion of messages featuring stronger joy and lower anger but also expressing more fearful ideas (RQ5).
This complex picture was obtained by giving structure to language with cognitive network science (Stella, 2020) and emotional datasets (Mohammad and Turney, 2013). Whereas the valence/arousal model of emotions was unable to detect emotional shifts (Mohammad, 2018), the NRC Emotion Lexicon and its eight emotional states coloured a richly detailed landscape of global and microscopic networked stances and perceptions (RQ2).
Reconstructing and investigating the conceptual and emotional dimensions of social discourse is key to understanding how people live through times of transition. This work opens a simple quantitative way for accessing and exploring these dimensions, ultimately giving a structured, coherent voice to online users' perceptions. Listening to this voice represents a valuable cornerstone for future participatory policy-making, using social media knowledge as a valid tool for facing difficult times.
---
About the author
Massimo Stella is a lecturer in computer science at the University of Exeter and a scientific consultant and founder of Complex Science Consulting. His research interests include cognitive network science and knowledge extraction models for understanding cognition, language and emotions. He published 33 peer-reviewed papers and has a Ph.D. in complex systems simulation from the University of Southampton (U.K.). E-mail: massimo [dot] stella [at] inbox [dot] com
---
This Appendix gathers tables and supporting information of relevance for the results reported in the main text.
---
Table 1a:
Daily top-ranked concepts according to frequency. Higher frequency indicates higher occurrence in tweets.
Words eliciting negative (positive) emotions are in warm (cold) colors (see also Figure 1 in the main text).
---
Table 2a:
Daily top-ranked concepts according to closeness centrality. Higher closeness indicates higher richness of different contexts in tweets. Words eliciting negative (positive) emotions are in warm (cold) colors (see also Figure 1 in the main text). | 69,382 | 1,347 |
e1ebd88f190cd93ee12a93cf1934036542760308 | Community violence and internalizing mental health symptoms in adolescents: A systematic review. | 2,022 | [
"JournalArticle",
"Review"
] | Purposes: Mental disorders are responsible for 16% of the global burden of disease in adolescents. This review focuses on one contextual factor called community violence that can contribute to the development of mental disorders. Objective: To evaluate the impact of community violence on internalizing mental health symptoms in adolescents, to investigate whether different proximity to community violence (witness or victim) is associated with different risks and to identify whether gender, age, and race moderate this association. Methods: Systematic review of observational studies. The population includes adolescents (10-24 years), exposure involves individuals exposed to community violence and outcomes consist of internalizing mental health symptoms. Selection, extraction and quality assessment were performed independently by two researchers. Results: A total of 2987 works were identified; after selection and extraction, 42 works remained. Higher exposure to community violence was positively associated with internalizing mental health symptoms. Being a witness is less harmful for mental health than being a victim. Age and race did not appear in the results as modifiers, but male gender and family support appear to be protective factors in some studies. This review confirms the positive relationship between community violence and internalizing mental health symptoms in adolescents and provides relevant information that can direct public efforts to build policies in the prevention of both problems. | Background
Mental disorders account for 16% of the global disease burden in adolescents. The onset of half of all cases of mental disorders occurs by the age of 14 years, and the onset of 75% of all cases occurs by the mid-20s [1]. Adolescence is a moment of considerable physical, psychological, cognitive, and sociocultural changes and an expected period of crisis [2]. The natural transition from childhood to adult life could mask some mental health symptoms. Most mental disorders go undetected, dragging their consequences to adulthood and causing functional impairment [1].
Mental health problems can be divided into externalizing and internalizing behaviour problems [3]. The first group is characterized by behaviours that target the environment and others. In internalizing problems, behaviours target the individual; this group includes common mental disorders and post-traumatic stress disorder.
Common mental disorders correspond to a group of symptoms, including anxiety, depression, and somatic complaints, but not necessarily a pathology; common mental disorders are highly prevalent [4]. A systematic review estimated the prevalence of past-year and lifelong common mental disorders worldwide as 17.6% and 29.2%, respectively [5]. A study conducted in Brazil with adolescents showed a prevalence of common mental disorders of 30.0% [6]. Post-traumatic stress disorder is also a significant health condition that affects children and adolescents. It consists of the presence of intrusive thoughts relating to a traumatic event, avoidance of reminders of the trauma, hyperarousal symptoms, and negative alterations in cognitions and mood [7]. A meta-analysis showed that the overall rate of post-traumatic stress disorder in this group was 15.9% (95% CI 11.5-21.5) [8]. Another meta-analysis that focused on delayed post-traumatic stress disorder found that the proportion of post-traumatic stress disorder cases with delayed onset was 24.8% (95% CI = 22.6% to 27.2%) [9].
Understanding the determinants of mental disorders is not an easy task, since these disorders are considered multifactorial phenomena. The literature has pointed out that genetic characteristics, the history of child development, and contextual factors are the main drivers of the development of mental illness among adolescents [10]. Among contextual factors, those considered the most important are low socioeconomic level, family conflicts and victimization by different forms of violence [11]. Adolescents can be especially vulnerable to community violence and its consequences [12]. At this stage, young people increasingly circulate outside the home and without their families [13]. Inexperience, emotional immaturity and the need to test limits, combined with this increased circulation in community spaces, could lead to exposure to violence and maximize its mental health effects. The increase in community violence in recent years is a global problem, and such violence is most frequent in low- and middle-income countries [14]. This review will focus on one contextual factor influencing mental disorders in adolescence: community violence [15,16]. Community violence is a type of interpersonal violence that occurs among individuals outside of personal relationships. It includes acts that occur in the streets or within institutions (schools and workplaces) [17]. In addition, community violence can be experienced directly (victimization) or indirectly (witnessing it or hearing about it).
Estimating the impact of exposure to community violence on adolescents' mental health has been at the core of a large body of research. Two previous meta-analyses showed a mild to moderate and positive effect of community violence on adolescents' mental health [18,19]. However, these associations need to be confirmed since many primary studies were published after 2009. Additionally, there are still significant gaps to be addressed. For instance, it is not clear whether different degrees of proximity to community violence (victimization, witnessing, or hearing about) influence mental health outcomes (depression, anxiety, and post-traumatic stress disorder) at different magnitudes. Moreover, it is not clear whether gender, race and age can moderate this relationship, as well as other factors such as family constitution and interpersonal relations. This review's main objective is to systematize the scientific literature that has estimated the impact of community violence on adolescents' mental health. Other goals are (i) to investigate whether different proximity to community violence is associated with different magnitudes of common mental disorders or post-traumatic stress disorder and (ii) to identify whether gender, age, and race moderate the association between community violence and internalizing symptoms.
---
Methodology
All methods were carried out in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analysis Protocols (PRISMA-P) 2015 checklist and the Joanna Briggs Institute Reviewers' Manual, Chapter 7: Systematic reviews of aetiology and risk [20,21]. The protocol is registered in the International Prospective Register of Systematic Reviews (PROSPERO), CRD 42019124740.
The review question was: 'Are adolescents exposed to higher levels of community violence at higher risk of developing internalizing mental health symptoms?'
---
Eligibility criteria
Population
Following the World Health Organization (WHO) classification for adolescence, studies were selected if adolescents in the sample were aged 10 to 24 years. To be included in the review, adolescents participating in the studies needed to be in this age group at the time of outcome measurement [22]. There were no exclusion criteria.
---
Exposure of interest
Our exposure of interest is community violence. Community violence events that occurred inside institutions, such as schools and workplaces, and events of a sexual nature, such as rape or other types of sexual aggression, were excluded. This choice was based on the fact that these types of community violence have different effects and magnitudes on adolescent mental health [23][24][25][26].
The inclusion criteria were original studies measuring community violence through questionnaires (answered by adolescents, parents, relatives, teachers or professionals responsible for the adolescent) or crime rates. The exclusion criteria were original studies that included other types of violence, such as domestic violence, bullying, or sexual violence, that could not be separated from community violence. Comparison groups included adolescents not exposed to community violence or exposed at a lower level. There were no exclusion criteria for the comparison group.
---
Outcomes/dependent variables
This review considered studies that included internalizing symptoms as the primary outcome, represented by post-traumatic stress disorder, common mental disorder symptoms, depression, and anxiety. Studies were included if they measured mental health symptoms through a questionnaire administered to the adolescents themselves, their parents, teachers, or professionals working with them, and if they reported an association measure for the outcome. Studies whose association measures came from unadjusted regression models were excluded.
---
Study design
This review included the following study designs: longitudinal, cross-sectional, and case-control. Case reports, case series, reviews, qualitative methodologies, interventions, descriptive studies, and methodologic studies were excluded.
---
Information sources
The search was performed in six allied health research databases: Medline (accessed through PubMed), PsycINFO, Embase, LILACS (Literatura Latino-americana e do Caribe em Ciências da Saúde), Web of Science, and Scopus. Regarding grey literature, only theses and dissertations were included. These were identified in the databases above, and "ProQuest Dissertations and Theses" was used to search for full texts. The search was conducted on February 5th, 2019, and updated on January 14th, 2021; no filters for year of publication or language were applied. After the third phase of selection, all studies included in the review had their reference lists analysed by two independent researchers to search for additional works.
---
Search strategy
---
Search terms were based on the review question and were constructed with a librarian (APPENDIX I). The main concepts were as follows: "adolescents" OR "youth" OR "teenagers" AND "community violence" OR "urban violence" OR "neighborhood violence" AND "mental health" OR "anxiety" OR "depression" OR "post-traumatic" OR "internalizing" OR "psychological symptoms". A librarian worked on obtaining the full-text works by searching bibliographic databases and libraries and contacting authors.
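For illustration only, such a Boolean string can be assembled from the three concept groups listed above; the grouping mirrors the main concepts, while the full, database-specific strategy is given in APPENDIX I.

```python
# Hedged sketch: each concept group is joined with OR and the groups with AND.
population = ["adolescents", "youth", "teenagers"]
exposure   = ["community violence", "urban violence", "neighborhood violence"]
outcomes   = ["mental health", "anxiety", "depression", "post-traumatic",
              "internalizing", "psychological symptoms"]

def or_block(terms):
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join(or_block(group) for group in (population, exposure, outcomes))
print(query)
```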
---
Study selection
Data selection was carried out in three stages: title, abstract and full text. During all phases, two researchers performed critical readings to apply the pre-established inclusion and exclusion criteria. All stages were preceded by a pilot that included 10% of the total number of works in each phase (concordance rate 80-97%). In the first and second stages of selection, works with any disagreement were carried forward. At the second stage of selection, we decided to exclude externalizing outcomes. In the third stage, we discussed all discrepancies; when discrepancies remained, a third researcher was called. All reasons for exclusion were registered.
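As a small illustration of how such a concordance rate can be computed, the snippet below calculates simple percent agreement between two reviewers on a pilot sample; the decisions shown are invented.

```python
# Assumed pilot decisions for two independent reviewers (include/exclude).
reviewer_a = ["include", "exclude", "exclude", "include", "exclude"]
reviewer_b = ["include", "exclude", "include", "include", "exclude"]

agreement = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / len(reviewer_a)
print(f"Concordance rate: {agreement:.0%}")   # 80%
```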
The authors of five studies were contacted for clarification. The corresponding authors of each work in which queries arose during the selection phase were contacted by e-mail. In cases where we did not receive a response, a new e-mail was sent 15 days later. The queries referred to the presence of questions about sexual and school violence in the violence questionnaires and a lack of reported confidence intervals (CIs) in the studies.
---
Data extraction
Data were extracted using EpiData 3.1 with a standardized form tested in the pilot. Extracted information included the following: (i) study design, setting, timing of measurements and recruitment; (ii) population demographics; (iii) exposure characteristics - classification subtypes and measurement instrument; (iv) comparison group; (v) outcomes - types and measurement instruments; and (vi) association measures. Again, two reviewers worked independently. All papers included at this phase were discussed. In two studies, a third researcher was consulted to decide about discrepancies. At the end of the extraction phase, the 42 studies were divided into two groups: 21 studies with complete information that were included in the meta-analysis and 21 studies with incomplete information that were included only in the qualitative synthesis.
---
Assessment of methodological quality
The quality of the studies was also evaluated independently by two researchers. The forms used were adaptations, also tested in the pilot phase, of predefined quality assessment forms for cohort/case-control studies and descriptive studies published in the Joanna Briggs Institute Reviewers' Manual [27]. Studies were classified into three categories: low, intermediate, and high quality. Researchers defined the cut-off points; all questions had the same weight in the final score. Discrepancies were discussed, and a consensus was achieved in all cases. Critical appraisal tools are presented in APPENDIX IV.
---
Synthesis of the results
Results are presented as a qualitative synthesis. A subgroup of 21 studies underwent quantitative synthesis. Forest plots were displayed to visualize the results. Heterogeneity was evaluated by the I² statistic, which describes the proportion of variation across the studies due not to chance but rather to heterogeneity [28,29]. The higher the percentage, the higher the level of heterogeneity. Because heterogeneity was still high when adopting the random-effects model, reasons for this were investigated, and subgroup analyses were conducted: stratification by proximity to community violence (witness and victim) and by type of outcome (post-traumatic stress disorder, depression and internalizing symptoms) were performed.
Because heterogeneity was still high in almost all forest plots, it was not possible to construct funnel plots to evaluate possible publication bias. We report our findings in accordance with PRISMA guidelines [30].
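As a reminder of how the I² statistic is obtained, the sketch below computes Cochran's Q and I² from hypothetical effect estimates on the log scale; it is a generic illustration, not the review's actual code or data.

```python
# Hypothetical log effect estimates and standard errors for four studies.
import numpy as np

log_est = np.array([0.02, 0.10, 0.05, 0.20])
se      = np.array([0.01, 0.04, 0.02, 0.08])

w = 1 / se**2                                 # inverse-variance weights
pooled = np.sum(w * log_est) / np.sum(w)      # fixed-effect pooled estimate
Q = np.sum(w * (log_est - pooled) ** 2)       # Cochran's Q
df = len(log_est) - 1
I2 = max(0.0, (Q - df) / Q) * 100             # % of variation beyond chance

print(f"Q = {Q:.2f} (df = {df}), I2 = {I2:.1f}%")
```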
---
Results
After a search in databases, 2987 works were identified, and no additional papers were found through other sources. Of these, 1005 duplicates were removed, and the selection phase started with 1982 records. During stages 1 and 2 of selection, 1,119 records were excluded, leaving 863 for the third phase. After the eligibility phase, 42 works remained. Of these, 21 were included in the quantitative synthesis. Details are presented in Fig. 1.
The results are presented in the following manner: the 42 studies included in the qualitative synthesis had their main characteristics presented in Table 1, and their results were described according to the review objectives. Quality assessments are presented in Tables 2 and 3. A subgroup of 21 studies could be meta-analysed. The first forest plots were generated and included all 21 studies. For these, we worked with the concept of general community violence and only one type of outcome, so for the studies that had more than one association measure (for victim and witnessing, for example), a weighted average was calculated, and the same was done for the studies that had more than one outcome. The I² value was 53.8%, with a p value of 0.003, thus indicating substantial heterogeneity [72]. Subgroup analysis was conducted with stratification by proximity to community violence (CV) (witness and victim) and then by type of outcome (post-traumatic stress disorder, depression and internalizing symptoms). The only graphics presented were those with heterogeneity smaller than 60%, which corresponds to the subgroups with post-traumatic stress disorder and internalizing symptoms as outcomes.
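The within-study averaging mentioned above (for example, combining a victim estimate and a witnessing estimate into one value per study before pooling) can be sketched as follows; the review does not state the weighting scheme, so the inverse-variance weights and the numbers are assumptions.

```python
# Assumed within-study estimates (odds-ratio scale) and standard errors of their
# logs, combined with inverse-variance weights into a single value.
import numpy as np

log_or = np.log([1.08, 1.03])      # victim, witnessing (hypothetical)
se     = np.array([0.02, 0.03])

w = 1 / se**2
combined = np.exp(np.sum(w * log_or) / np.sum(w))
print(f"Combined association measure: {combined:.3f}")
```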
The results of the summary measures must be interpreted with caution. Only some of the qualitative synthesis studies presented complete data that would allow inclusion in the quantitative synthesis. The first graph generated (Fig. 2) shows high heterogeneity, and the graphs presented for the outcomes of post-traumatic stress disorder (Fig. 3) and internalizing symptoms (Fig. 4) do not show high heterogeneity but represent a small group of studies compared to the total number included in the review. Nevertheless, it was possible to see a small but statistically significant greater effect for post-traumatic stress disorder than internalizing symptoms.
---
Table 3 Quality assessment of the included cross-sectional studies
Answers: Y -Yes, N -No, U -Undefined. Score: N (1 point); U (2 points); Y (3 points). The studies were ordered according to their quality. Light grey colour -low quality; medium grey -intermediate quality; dark grey -high quality. The work by Grinshteyn et al. (2018) was evaluated as a longitudinal study because of its study design, but the results presented in Table 1 are classified as cross-sectional because they were statistically analysed using the procedures for cross-sectional studies.
---
Table 3 columns: Study; Randomized sample; Sample definition; Confounders; Comparable groups; Losses; Outcome measurable; Statistical analysis; Exposure measurable; Score.
---
Mental health symptoms and exposure to community violence
Twenty-eight studies did not consider different degrees of proximity to violence in their analysis [26, 31-43, 45-50, 53, 55, 56, 58, 61, 62, 64, 65, 69, 71, 73]. Of these, twenty-three found a significant association between exposure and outcome (Table 1). Five studies did not find community violence to be a risk factor for internalizing mental health symptoms [26,34,43,50,53]. Le Blanc et al. [26] attributed the lack of association between community violence and the outcomes analysed to the fact that other types of violence (home and school) were considered in the statistical analysis and could have influenced the results towards a null association. Farrel et al. [34] discussed their results in light of the desensitization hypothesis, since the sample had a high prevalence of community violence [74][75][76]. Goldman-Mellor et al. [53] compared their sample's perception of violence with objectively measured neighbourhood violence derived from criminal statistics. Perception of violence in the neighbourhood is a different concept from exposure to community violence because the former relates to how adolescents see the environment in which they live. The authors found that adolescents who perceived their neighbourhood as unsafe had a nearly 2.5-fold greater risk of psychological distress than those who believed their neighbourhood was safe. Adolescents who live in areas objectively characterized by high levels of violent crime measured by criminal statistics were no more likely to be distressed than their peers in safer areas.
Fig. 2 Forest plot of studies with general community violence as exposure and any type of internalizing mental disorders as outcomes
Aisenberg et al. [43] also did not find an association between community violence and PTSD, and they suggested that other factors, such as one's relationship to the victim and one's physical proximity to the violent event, may influence this association. It is important to underscore that this is the only study included in this review that was considered low quality. Donenberg et al. [50] did not find an association between community violence and internalizing problems, only for externalizing problems in boys; factors that could have influenced these results are the small sample and the fact that the measurement of community violence considered only witnessing.
The subgroup of 20 studies that were meta-analysed had a summary measure of 1.02 (95% CI 1.01-1.02), showing that there is a small but statistically significant higher risk of internalizing mental health symptoms for adolescents exposed to CV.
---
Differences in mental health according to the proximity of CV -victims of CV vs. witnesses of CV
Fourteen studies considered proximity to community violence in the statistical analysis [32,40,44,51,52,54,57,59,60,63,[66][67][68]70]. Three of these studies found a gradient of risk for mental health outcomes according to proximity to community violence, which means a larger risk for victims compared to witnesses and/or for witnesses compared to those merely knowing of violent events [60][61][62][63][64][65][66][67][68]. Six works found an association for victims of community violence but not for witnesses of community violence, and one found a positive association for different forms of victimization using witnessing as a control [23,32,44,52,57,59,70]. One study found an association between all community violence measures and mental health outcomes with the same magnitude, and three did not find an association either for victims or for witnesses [32-44, 70, 71, 73].
Fig. 3 Forest plot of subgroups of studies that considered post-traumatic stress disorder as an outcome
The results indicate that higher proximity to violence was related to a higher risk of internalizing mental health symptoms. Grinshteyn et al. [54], in addition to a gradient of risk from victims to witnesses to those who merely knew about events, also found differences between violent events and non-violent events, with the former accounting for a higher magnitude. The authors that did not find an association discuss the possibility of desensitization and of other types of violence (school or family violence) softening the effects of community violence on mental health [59].
The meta-analysis graphs with victim and witnessing subgroups were not considered because they presented high heterogeneity (61.1% and 67.6%, respectively).
---
Assessment of community violence by crime statistics
Six studies measured exposure to community violence with crime rates [31,35,48,53,54,71]. Grinshteyn et al. [54] defined crime rates as the crime rate per 1000 people in a given postal code. They also collected self-report data for comparison. Their results pointed to a decreasing gradient of risk from victims to witnesses to those who merely knew of violent events. When comparing criminal statistics with self-report measures, the results were positively significant only for depression and at a smaller magnitude. The authors discussed the importance of constructing these area-level crime rates for smaller geographic units and of considering a larger variety of crimes. Goldman-Mellor et al. [53] measured perceived neighbourhood safety with self-reported answers and objectively measured neighbourhood violence using a geospatial index based on FBI Uniform Crime Reports. Their results showed an association for the first measure but not for the second one, suggesting that perception of neighbourhood violence matters more for mental health than objective levels. Velez-Gomez et al. [71] and Cuartas et al. [48] utilized both criminal statistical analyses and homicide rates. The first group found a positive association only for the outcome "ineffectiveness" in early adolescents (10-12 years), and the second group found a positive relationship for common mental disorders and post-traumatic stress disorder.
Fig. 4 Forest plot of subgroups of studies that considered internalizing symptoms as outcomes
Gepty et al. [35] utilized criminal statistics classifying violent crimes and non-violent crimes and found a positive association with depressive symptoms for violent crimes but not for non-violent crimes. Da Viera et al. [31] worked with criminal statistics related to adolescents' residence and school addresses and found that adolescents who live in areas with low crime and study in areas with high crime have a greater chance of presenting anxiety, probably related to feelings of insecurity on the way to school.
---
Influence of gender, race, and age on the association between CV and internalizing mental health symptoms
Thirteen studies analysed gender as a moderator in the relation above; four of them found gender to be a potential moderator. Bacchini et al. [44] and Boney-McCoy et al. [69] found that girls are more affected by negative experiences of community violence than boys, reacting with high anxiety, depression, sadness and post-traumatic stress symptoms. Haj-Yahia et al. [55] found that girls had more internalizing problems than boys when they were victims of community violence but not witnesses, while Foster et al. [52] found a positive association between community violence and depressive and anxious symptoms only for witnesses but not for victims. The other seven works tested gender as a moderator and did not find differences between boys and girls in the association.
Only two studies, one conducted in Israel [60] with Arabic and Jewish subjects and another in Chicago [47] with Latinx, Black and White individuals, tested race as a moderator of the relationship between community violence and mental health symptoms. In the first study, Jewish subjects reported higher levels of witnessing community violence, while Arabs reported higher levels of victimization by community violence and of post-traumatic symptoms over the last year, but ethnic affiliation did not moderate the relationship between community violence exposure and PTSD. Chen et al. [47] worked with a large multi-ethnic sample in Chicago and found that Latinx and Black adolescents were more exposed to community violence, had higher levels of depression and delinquency, and had more risk factors, such as low family warmth, peer deviance, school adversity and community violence exposure. In addition, the results from regression models showed a higher chance of depression for White adolescents than for minority adolescents (Black and Latinx), which is explained in light of the desensitization hypothesis [77,78].
The only study that considered age as a moderator of the relationship above was the one conducted by Gomez et al. [71]. Even so, the stratification occurred with an age group that did not fit our inclusion criteria (8-10 years), so the results were presented only for the interval 10-12 years.
---
Family support, communication skills, emotional regulation and contextual factors that affect adolescents' mental health when exposed to community violence
Other factors appear to be moderators of the association between community violence and mental health symptoms [26,44,56,58,63,65], as also reported by Sun et al. [42], O'Leary [64] and Gepty et al. [35]. The most frequent were family characteristics such as mother and father support, parental monitoring, sibling support, and communication skills. Bacchini et al. [44], Howard et al. [58] and Ozer et al. [65] described that parental monitoring/support could reduce depression and symptoms of distress. Talking with their parents and expressing their fears could make young people feel protected, reducing feelings of isolation and danger. Ozer et al. [65] also found that sibling support was protective against post-traumatic stress disorder symptoms and depressive symptoms in adolescents exposed to community violence; teacher help did not have a protective effect on either outcome, and a tendency to keep their feelings to themselves was demonstrated to be a protective factor against post-traumatic stress disorder symptoms [65]. Haj-Yahia et al. [55] and O'Donnell et al. [63] did not find differences in the chances of depression and post-traumatic stress disorder for adolescents exposed to community violence when family support was present (or, in the former study, teacher support).
Individual characteristics of personality and emotional functioning also appear in some studies as moderators. Le Blanc et al. [26] found that good communication and problem-solving skills protect adolescents exposed to community violence from psychological stress. Sun et al. [42] found that internal dysfunction involving emotional dysregulation, such as self-harm, potentiates symptoms of post-traumatic stress disorder in adolescents exposed to community violence. O'Leary [64] found that expressive suppression, which refers to active inhibition of observable verbal and nonverbal emotional expressive behaviour, buffers the effect of community violence exposure on depression. Gepty et al. [35] studied the ruminative cognitive style, which is the tendency of an individual to be caught in a cycle of repetitive thoughts, and found that it also increases the chance of depression in adolescents exposed to violent crimes.
Contextual factors were also evaluated as moderators. Cuartas et al. [48] studied the effects of living in a poor household, having been directly victimized or having witnessed a crime, perceiving the neighbourhood as unsafe, and social support, and found that the first three potentiate the chance of post-traumatic stress disorder in adolescents exposed to community violence and that perceiving the neighbourhood as unsafe also worsens the chances of common mental disorders. O'Donnell et al. [63] analysed adolescents from The Republic of Gambia, Africa, and found that a positive school climate functions as a protective factor between community violence exposure and post-traumatic stress disorder, and that this effect was stronger for witnesses than for victims.
Cultural factors related to ethnicity were also evaluated. Henry et al. [56] studied cultural pride reinforcement and cultural appreciation of legacy as potential moderators between community violence and depressive symptoms in a sample exclusively composed of African Americans. Cultural appreciation of legacy was found to be a protective moderator of this relationship, leading to the conclusion that teaching African American youth about their cultural heritage can help them cope with racial discrimination.
---
Different risks for different outcomes
Some studies analysed more than one outcome, with the following distribution: depression (20), internalizing symptoms (16), post-traumatic stress disorder (15) and anxiety/stress (1). Different outcomes are associated with community violence exposure at different magnitudes, as shown in Table 1, and factors analysed as moderators of this association also act differently.
The subgroup meta-analysis graphs by outcome that showed heterogeneity below 50%, and were therefore presented in this review, were those with post-traumatic stress disorder and internalizing symptoms as outcomes. The summary measure for the post-traumatic stress disorder outcome was greater than 1 (1.12, 95% CI 1.05-1.19), while for internalizing symptoms the summary measure was borderline (1.02, 95% CI 1.00-1.04).
---
Discussion
The results of the qualitative synthesis reinforced the positive relation found in previous meta-analyses between community violence exposure and internalizing mental health symptoms in adolescents [18,19]. The summary measure from the 20 studies in the quantitative synthesis showed a small but positive association. Proximity to community violence appeared to be an essential factor contributing to the risk of mental health symptoms. Adolescents who are victims of community violence are at greater risk than those who witness community violence. The summary measures of the victim and witnessing subgroups could not be considered due to high heterogeneity. Regarding the outcomes analysed, studies showed different risk magnitudes for different outcomes. The summary measures for post-traumatic stress disorder were positive and small but larger than those for the subgroup of internalizing symptoms.
Longitudinal studies provide stronger evidence than cross-sectional studies since they can establish cause and effect relationships [79]. Of the twelve studies with a longitudinal design included in this review, 10 showed at least one significant effect measure in the causal association between greater exposure to community violence and increased risk of developing internalizing mental disorders. This fact supports the idea that there is a causal association in this relationship. Regarding moderators mentioned in objectives (gender, age, and race), only female gender appeared to be a significant moderator in 4 studies. These differences between genders are also found in studies that consider externalizing symptoms; however, for this outcome, boys have more risk than girls when exposed to community violence. A possible explanation for this distinction is the difference in upbringing between boys and girls, especially in more traditional societies, where girls are encouraged to keep their emotions to themselves and to have more socially acceptable behaviour, while the boys are encouraged to reinforce their masculinity, sometimes through violent and deviant behaviour [80].
Age was also not tested in the majority of studies as a possible moderator. In the previous meta-analysis conducted by Fowler [19], which included children and adolescents, differences were found between these two stages of the life cycle, with teenagers having the greatest risk. In regard to teenagers, on the one hand, a tendency towards a greater circulation around the neighbourhood by older adolescents is expected when compared to younger adolescents, which can mean a higher exposure to community violence in the former group. On the other hand, the emotional maturation expected over the years can protect against the effects of violence on mental health. Given the scarcity of studies that assess this influence, we can point out this gap as an area to be researched in future studies. Race was tested as a moderator only in two studies - one with Jews and Arabs and the other with White, Black and Latinx subjects. The latter study found that Latinx and Black adolescents are at higher chance of developing depression when exposed to community violence. It is important to highlight the fact that thirteen of the forty-two studies included in this review did not have any information about the race of participants. On the other hand, in the group of studies that classified participants' race, some were composed exclusively of African Americans. It must be pointed out that the lack of this information, as well as the homogeneity of the samples, is an important limitation of the studies. Previous meta-analyses could not evaluate race as a modifier because of these same problems [19]. As a counterpoint, a systematic review and meta-analysis showed that racism is linked to poor physical and mental health [81]. Since there is substantial gender inequality among victims of community violence, with boys more likely to be victims, and racism is a critical factor that can influence mental health, it is important to study the effects of race on the association of community violence and mental health symptoms, as well as possible protective factors and interventions for this population [14]. The study by Henry et al. [56] is an example of how maternal messages of positive reinforcement of Black culture can protect against depressive symptoms in adolescents of this ethnic group who are exposed to community violence.
An important aspect to be highlighted, which appears in our results and in previous meta-analyses, is the phenomenon of desensitization. This phenomenon can occur in areas with high levels of community violence. With chronic and recurrent exposure, individuals do not present as many depressive and anxious symptoms after a certain degree of community violence, in a process of naturalization of barbarism [73,77,78]. This phenomenon should not be interpreted as beneficial, as this naturalization of violence may have negative effects on other outcomes. In relation to externalizing symptoms, for example aggressive behaviour and delinquency, the opposite effect is seen: these behaviours increase progressively and linearly with increasing violence.
Most studies included in this review were conducted in the United States of America (27), followed by South Africa (4), Israel (3), Colombia (2), the Republic of Gambia (1), China (1), England (1), Switzerland (1), Italy (1), and Mexico (1). Globally, community violence varies according to region and country. According to the World Health Organization [82], homicide rates were highest in Latin America (84.4/100,000 in Colombia, 50.2/100,000 in El Salvador, 30.2/100,000 in Brazil) and lowest in Western European countries (0.6/100,000 in France and 0.9/100,000 in England) and Asia (0.4/100,000 in Japan). In this review, exposure rates to community violence differed between studies. For example, four studies conducted in Africa reported that 83.4% to 98.9% of subjects were witnesses of community violence, while 40.1% to 83.5% of subjects were victims of community violence [59,63,66,70]; in contrast, studies in the United States of America showed greater variation, with witnessing of community violence ranging from 49% to 98% and victimization from 10.3% to 69% [26, 32-34, 36-39, 41, 43, 51-54, 56, 58, 62, 65, 68]. Part of this difference could be due to different methods for measuring community violence, but another part could be because of different population origins. Socioeconomic level, social inequalities, urban disorder, weather factors, and cultural factors can influence community violence exposure rates and can also influence how adolescents react to them [83][84][85]. Therefore, different territories can have different levels of community violence and different ways of dealing with it. Some studies in this review reinforced this aspect; for example, Cuartas et al. [48] studied the effect of contextual factors such as poverty in the neighbourhood and social support as potential moderators of the association of community violence with CMD and PTSD, confirming their hypotheses for the former. O'Donnell et al. [63] found that a positive school climate was a protective factor for youth who witnessed CV with respect to post-traumatic stress reactions. The authors highlighted the high levels of self-reported hostile school climate, which may reflect structural factors of the school context. However, considering cultural aspects, none of the studies included in this review compared, for example, urban areas with rural areas. This would be an interesting comparison to examine. Considering these variations attributed to contextual and cultural factors, more studies conducted in different countries and cities would be relevant.
In this review, some studies analysed the difference between exposure to violence measured by crime statistics and community violence measured by self-report questionnaires or perceived violence [53,54]. The authors found differences in their results, as described in section 3.3. The first methodology is relevant because it is less costly and simpler to conduct, which is especially important in countries where there are few studies in this area. Nevertheless, studies that compare the two forms of measuring violence (self-report and criminal statistics) can contribute to a better understanding of the differences between them.
There are strengths and limitations that should be considered in this systematic review. Strengths include an extensive search of databases, contact with authors for clarification and no filters applied for year or language in the search, all of which contributed to a larger body of literature. Alternating pairs of researchers worked in the selection and extraction phases to avoid selection bias and errors in extraction. Studies included in the review were composed mostly of adolescents from schools or population-based samples and not from mental health services or other types of institutions, leading to a more representative sample. This review utilized a community violence concept that excludes sexual and school interpersonal violence, focusing on and estimating the effect of such violence on adolescents' mental health, which we considered a strength since it brings more specificity to the results. The main limitations were that different tools for exposure and outcome measures were used, leading to heterogeneous results and compromised pooling. Study designs and statistical analyses also differed between studies, which made comparison difficult.
---
Conclusion
This review confirmed a positive relationship between community violence, excluding sexual assault and school violence, and internalizing mental health symptoms in adolescents. Even though race and age did not appear to be moderators in most of the studies, girls were more sensitive to the effects of the exposure in some studies, showing that gender can be a possible moderator in this relationship. Other factors, such as family constitution, communication skills and emotional functioning, also seem to have an influence on this association.
This review provides relevant information regarding the health and public safety field and can serve to direct public efforts to build policies to address the prevention and treatment of both community violence and mental disorders. This review also contributes to knowledge of these issues among health and education professionals.
---
Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1186/s12888-022-03873-8.
---
Additional file 1.
---
Additional file 2.
---
Additional file 3.
---
Additional file 4.
---
Authors' contributions
The listed authors conceived the project (CM, CL), developed the protocol (CM, CL), carried out the searches (CM), carried out the selection and extraction phase (CM, DF, VC), interpreted the findings (CM, JV, CL, WJ, DF, VC), drafted the manuscript (CM, JV, DF, VC), and approved the manuscript (CL, WJ). All authors have read and approved the manuscript.
---
Availability of data and materials
All data generated or analysed during this study are included in this published article (and its supplementary information files).
---
Declarations
Ethics approval and consent to participate
Not applicable.
---
Consent for publication
Not applicable.
---
Competing interests
The authors declare that they have no competing interests.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 40,535 | 1,524 |
7609898d6b320d758880931fb3e086c60edfa2a1 | Decomposition of educational differences in life expectancy by age and causes of death among South Korean adults | 2,014 | [
"JournalArticle"
] | Background: Decomposition of socioeconomic inequalities in life expectancy by age and cause allows us to better understand the nature of socioeconomic mortality inequalities and to suggest priority areas for policy and intervention. This study aimed to quantify age- and cause-specific contributions to socioeconomic differences in life expectancy at age 25 by educational level among South Korean adult men and women. Methods: We used National Death Registration records in 2005 (129,940 men and 106,188 women) and national census data in 2005 (15,215,523 men and 16,077,137 women aged 25 and over). Educational attainment as the indicator of socioeconomic position was categorized into elementary school graduation or less, middle or high school graduation, and college graduation or higher. Differences in life expectancy at age 25 by educational level were decomposed into age- and cause-specific mortality differences using Arriaga's decomposition method. Results: Differences in life expectancy at age 25 between college or higher education and elementary or less education were 16.23 years in men and 7.69 years in women. Young adult groups aged 35-49 in men and aged 25-39 in women contributed substantially to the differences in life expectancy between college or higher education and elementary or less education. Suicide and liver disease were the most important causes of death contributing to the differences in life expectancy in young adult groups. For older age groups, cerebrovascular disease and lung cancer were important in explaining the educational differential in life expectancy at age 25 between college or higher education and middle or high school education. Conclusions: The contribution of the causes of death to socioeconomic inequality in life expectancy at age 25 in South Korea varied by age group and differed by educational comparison. The age-specific contributions of different causes of death to life expectancy inequalities by educational attainment should be taken into account in establishing effective policy strategies to reduce socioeconomic inequalities in life expectancy. | Background
Investigation into contributions by specific causes of death and age groups to absolute socioeconomic inequalities in total mortality is important to understand the mechanisms of socioeconomic health inequalities and to establish policies and intervention programs to reduce socioeconomic inequalities in health. Many studies have reported the contribution of causes of death in specific age groups to socioeconomic mortality inequalities in Asia as well as in western countries [1][2][3][4][5]. They revealed that the pattern of the contribution by specific causes of death varied across countries, which suggests different policy priorities for different countries.
Life expectancy is the expected number of years of life remaining for a person at a given age and is a summary measure of mortality determined by the probability of death at each age [6]. It has important strengths in that it can be more easily understood by the public than age-standardized mortality rates and can be compared between countries or over time [7][8][9]. In addition, life expectancy can be decomposed by cause of death and specific age group, which allows us to better understand mechanisms of socioeconomic inequalities in mortality.
Decomposition of socioeconomic inequalities in life expectancy by age or cause has mainly been performed in western countries [10][11][12]. Some studies showed age-specific contributions to socioeconomic inequalities in life expectancy over time [7,10,13,14] while other studies reported patterns of cause-specific contributions [6,[11][12][13][14]. However, there is still a paucity of studies investigating age- and cause-specific contributions to differences in life expectancy by socioeconomic position (SEP) using national data covering the whole population. This study aimed to quantify age- and cause-specific contributions to socioeconomic differences in life expectancy at age 25 by educational level among adult men and women in South Korea (hereafter 'Korea') to provide evidence guiding intervention priorities.
---
Methods
---
Study subjects
We used national death certificate and census data in 2005 from Statistics Korea. The total number of deaths at age 25 and over was 239,166 in 2005. After excluding records with missing or inaccurate information on level of education, cause of death, or age, the present study included 236,128 deaths (98.7% of total deaths, 129,940 men and 106,188 women). In the 2005 national census, 15,215,523 men and 16,077,137 women aged 25 and over were identified and included in this study.
By law, all deaths must be reported to Statistics Korea within a month of their occurrence in Korea. Death registration in Korea is known to be complete for deaths occurring among those aged 1+ years since the mid-1980s [15]. Death certification by a physician has been suggested as a very important factor in improving the accuracy of reported causes of death in Korea [16,17]. The proportion of deaths certified by physicians was 86.9% in 2005. The reliability of the educational level in death certificate data was reported to be substantial [18].
This study was approved by the Asan Medical Center Institutional Review Board, Seoul, Korea.
---
Socioeconomic position (SEP) indicator
The individual's own level of education was used as the SEP indicator in this study. Educational attainment was categorized into elementary school graduation or less, middle or high school graduation, or college graduation or higher. Elementary school and high school in Korea correspond to the International Standard Classification of Education (ISCED) 1 and ISCED 3, respectively, whereas there is no schooling system in Korea relevant to ISCED 4 [19]. College is classified as ISCED 5.
Educational achievement among the Korean population has improved remarkably during the past decades, along with the country's rapid economic development. The enrollment rate in elementary school was 69.8% in 1951 but reached 97.7% in 1980 and 98.6% in 2012 [20]. An explosive increase was observed in the enrollment rate for college or higher education, which skyrocketed from 4.2% in 1965 to over 60% in 2005. Thus, a very different educational distribution across age groups can be found in Korea. For example, in 2005, 61.5% of women aged 25-29 years were classified as college or higher graduates while 76.1% of women aged 60-64 years were classified as elementary school graduates or less [21].
---
Statistical analysis
For life expectancy at age 25, life tables were constructed using 5-year probabilities of death by educational level. The 5-year probabilities of death were calculated from age-specific death rates, which were estimated from the number of deaths in the death certificate data and the population counts in the census data by age and educational level. Differences in life expectancy at age 25 by educational level were then calculated.
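To make the life-table step concrete, the following is a minimal Python sketch of an abridged life table for ages 25 and over, using hypothetical 5-year age-specific death rates; in the study itself these rates come from the death certificate counts and census population counts described above, computed separately for each sex and educational group.

```python
import numpy as np

def life_expectancy_at_25(m):
    """Abridged life-table sketch. `m` holds age-specific death rates for the
    5-year groups 25-29, ..., 85-89 plus an open-ended 90+ group."""
    n = 5.0
    # 5-year probabilities of dying, assuming deaths occur on average mid-interval
    q = n * m[:-1] / (1.0 + (n / 2.0) * m[:-1])
    l = np.empty(len(m))
    l[0] = 100_000.0                    # radix: survivors at exact age 25
    for i, qi in enumerate(q):
        l[i + 1] = l[i] * (1.0 - qi)
    d = -np.diff(l)                     # deaths within each closed interval
    L = n * l[1:] + (n / 2.0) * d       # person-years lived in closed intervals
    L_open = l[-1] / m[-1]              # person-years lived in the open 90+ interval
    return (L.sum() + L_open) / l[0]    # life expectancy at exact age 25

# Hypothetical rates for a low- and a high-education group (14 age groups each)
m_low = np.array([.002, .003, .004, .006, .009, .013, .019, .028,
                  .042, .063, .094, .14, .21, .35])
m_high = np.array([.001, .001, .002, .003, .005, .007, .011, .017,
                   .026, .041, .064, .10, .16, .30])
gap = life_expectancy_at_25(m_high) - life_expectancy_at_25(m_low)
```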
Age- and cause-specific contributions to the educational differences in life expectancy at age 25 were estimated using Arriaga's decomposition method [22]. The Arriaga method, which has been widely used to decompose differences in life expectancy, considers a direct effect, an indirect effect, and an interaction effect of mortality differences on life expectancy. The direct effect reflects the consequence of a mortality difference within that age group. The indirect effect arises because a mortality change within a specific age group alters the number of survivors at the end of that age interval. The interaction effect results from the combination of the changed number of survivors at the end of the age interval and the lower (or higher) mortality rates at older ages. The total contribution of each age group to the change in life expectancy can be calculated by adding the direct, indirect, and interaction effects [22,23]. With Arriaga's decomposition method, the difference in life expectancy can be decomposed by age and cause of death, which enables us to explain life expectancy differentials in terms of the contribution of each factor. A higher mortality rate in the low-SEP group than in the high-SEP group makes a positive contribution to socioeconomic differences in life expectancy. In other words, a positive contribution refers to a contribution to the increase in educational differentials in life expectancy. The total life expectancy differential by SEP is the sum of the years contributed, positively or negatively, by deaths in each age group or cause.
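Below is a hedged sketch of the age component of Arriaga's decomposition, assuming the life-table columns (survivors l, person-years L lived in each interval, and person-years T lived above each age) have already been computed for a low-education group (subscript 1) and a high-education group (subscript 2); the cause-specific step would further split each age contribution in proportion to cause-specific mortality differences within that age group. The age-specific contributions returned here sum to the total difference in life expectancy at age 25 between the two groups.

```python
import numpy as np

def arriaga_age_decomposition(l1, L1, T1, l2, L2, T2):
    """Contribution of each age group (in years) to e2(25) - e1(25),
    where group 2 has the lower mortality (higher education)."""
    k = len(l1)
    contrib = np.zeros(k)
    radix = l1[0]
    for i in range(k - 1):
        # direct effect: mortality difference within the age interval itself
        direct = (l1[i] / radix) * (L2[i] / l2[i] - L1[i] / l1[i])
        # indirect + interaction effects: extra survivors carried to older ages
        others = (T2[i + 1] / radix) * (l1[i] / l2[i] - l1[i + 1] / l2[i + 1])
        contrib[i] = direct + others
    # the open-ended last age group has only a direct component
    contrib[-1] = (l1[-1] / radix) * (T2[-1] / l2[-1] - T1[-1] / l1[-1])
    return contrib   # contrib.sum() equals the total life expectancy difference
```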
Life expectancy was calculated by causes of death. A total of 8 broad and 17 specific (15 for men and 14 for women) causes of death were selected based on the main causes of death in South Korea [24] (see Table 1).
Causes of death were coded using the 10 th version of the International Classification of Disease (ICD-10).
---
Results
Table 2 shows the numbers of subjects and deaths and life expectancy at age 25 according to educational level. Middle or high school graduates accounted for about half of the total subjects among both men and women (50.3% for men and 52.1% for women), while the numbers of deaths were greatest among those with elementary school graduation or less (41.9% for men and 57.6% for women). Life expectancy at age 25 was 48.39 years in men and 54.75 years in women, respectively. Life expectancy increased stepwise with educational level. Differences in life expectancy at age 25 between college or higher education and elementary or less education were 16.23 years in men and 7.69 years in women.
Figure 1 shows age-specific contributions to the educational gap in life expectancy among Korean men and women. In men, those aged 40-44 as a single age group contributed most (13.9%) to educational differences in life expectancy at age 25 between college or higher education and elementary or less education. Contributions of ages between 35 and 49 to the educational differences in life expectancy were greater than those of other age groups. This was true for the educational differences between college or higher education and elementary or less education and for the differences between middle or high school and elementary or less education. Meanwhile, older age groups aged 60-64 and over contributed significantly to the educational differences in life expectancy between middle or high school and college or higher education.
Figure 1 also presents age-specific contributions among Korean adult women. Among women, younger age groups between 25 and 39 showed greater contributions than older age groups. This was true for educational differences between college or higher education and elementary or less education and for those between middle or high school and elementary or less education. Meanwhile, older age groups aged over 65 contributed significantly to the educational differentials in life expectancy.
Table 3 presents cause-specific contributions to the life expectancy gap by education in Korean men. Among broad causes of death, the contributions by cancers were greater than those of cardiovascular diseases in men, while in women the contributions by cardiovascular diseases surpassed the contributions by cancers. This pattern held for all the comparisons between educational levels considered. In both men and women, the contributions by external causes were significant, substantially accounting for total educational differences in life expectancy (about 28-29% in men and 20-24% in women). Table 3 also shows contributions by specific causes. Liver disease, suicide, transport accidents, cerebrovascular disease, and lung cancer played important roles in explaining educational differences in life expectancy in men. In particular, the most important contribution among specific causes was made by liver disease, explaining about 9-12% of total educational differences in life expectancy between college or higher education and elementary or less education and between middle or high school and elementary or less education. These large contributions were not found in women. In addition, suicide in men was the most important contributor to the educational differentials in life expectancy between middle or high school and college or higher education and the second most important contributor to other educational differences. In women, cerebrovascular disease, suicide, transport accidents, liver disease, and diabetes mellitus were the main contributors to life expectancy differences by educational level. Among those, contributions by cerebrovascular disease and suicide were most important. The leading cancers in Korea, lung, stomach, and liver cancers, showed relatively greater mortality rates in low education groups than in the high education group and thus contributed positively to the educational differences in life expectancy among both men and women. However, prostate cancer and colorectal cancer among men and breast cancer and colorectal cancer among women contributed negatively to the differences in life expectancy for some educational comparisons. Ill-defined causes were also important in accounting for educational life expectancy differences in both men and women.
Figure 2 presents patterns of contributions by major causes of death to educational life expectancy differences by age group. In men, suicide and liver disease contributed significantly to the educational differences in life expectancy at younger ages between 35 and 49, while major contributions by lung cancer and cerebrovascular disease were found among men aged 60 or over. Similar findings were observed among women. Suicide and liver disease showed important contributions in younger age groups such as ages 35-39, while in older age groups of women diabetes mellitus, cerebrovascular disease, and ischaemic heart disease contributed significantly to the educational life expectancy differences. Meanwhile, the magnitude of the contribution by ischaemic heart disease was small or negative among older men.
---
Discussion
Differences in life expectancy at age 25 between elementary or lower education (6 or fewer years of schooling) and college or higher education (13 or more years of schooling) in Korea were 16.23 years in men and 7.69 years in women.
In Finland, the differences in life expectancy at age 30 between low education (9 or fewer years of schooling) and high education (13 or more years of schooling) were 6.96 years in men and 3.88 years in women in 1998-99, whereas the same educational differences in life expectancy at age 30 in Russia were 13.08 years in men and 10.21 years in women in 1998 [25,26]. The differences in life expectancy at age 25 between primary or lower education and university education in Lithuania were 16.75 years in men and 15.20 years in women in 2001 [27]. In Denmark, the educational differences in life expectancy at age 30 between primary or lower secondary education and tertiary education were 6.4 years in men and 4.7 years in women in 2011 [28]. Although it is hard to directly compare the magnitude of educational differences in life expectancy between countries because of differing educational categories and study periods, the results of this study suggest that the educational differences in life expectancy in Korea are relatively greater than those in northern European countries. Younger age groups were more important contributors to the educational differentials in life expectancy between elementary or less education and the other two higher educational groups, while older age groups were more important in explaining the difference between middle or high school education and college or higher education. This was generally true for both men and women in this study. This may mean that the dismal effects of a poor socioeconomic environment appear at younger ages among people with extreme social disadvantage (i.e., elementary or less education at young ages). In Korea, where the enrollment rate in middle school increased from 42% in 1970 to over 90% in 1990 [20], only 0.4-13.0% of people aged 25-49 had elementary or less educational attainment [21]. People aged less than 35 years with elementary or less education may therefore represent an extremely socially excluded group in Korea. This young and socially marginalized population might have experienced neo-liberal structural reforms, which resulted in rising unemployment rates, greater labor flexibility, and growing income inequality, as well as the lack of a generous social safety net, during the economic crisis in 1998 and the credit card crisis in 2003. The main causes of death contributing to the educational differences in life expectancy at those ages were suicide and liver disease in both genders.
Korea has recorded the highest suicide rates among the Organisation for Economic Co-operation and Development (OECD) member countries since 2003, with upsurges during Korea's economic crisis in the late 1990s and during the credit card crisis in 2003 [29,30]. Suicide is the most frequent cause of death among men and women in their 20s and 30s in Korea, although the elderly have higher suicide rates than younger age groups [31]. Prior Korean studies showed that men and women aged 35-44 had greater educational differentials in suicide mortality, in both relative and absolute terms, than older age groups [30,32]. The results of this study, as well as those of prior studies, suggest that the shift toward a harsher labor market environment might have had a greater impact on younger, socioeconomically marginalized educational groups who did not have sufficient resources and skills to overcome the socioeconomic difficulties of the late 1990s and early 2000s.
The main risk factors for liver disease are viral hepatitis and alcohol abuse [33,34]. According to the 2007 National Health and Nutrition Examination Survey of Korea by the Korea Centers for Disease Control and Prevention, the prevalence of hepatitis B antigen positivity among Koreans aged 19-49 is 2.1-4.3% and the prevalence of hazardous alcohol use is 44.5-45.8% [35]. Considering the relatively high rates of hepatitis B infection and alcohol abuse in Korea, social inequalities in hepatitis B viral infection and hazardous alcohol use might well have contributed to a significant part of the socioeconomic inequalities in liver disease [36][37][38].
Cerebrovascular disease and lung cancer in older age groups were important causes of death contributing to differentials in life expectancy at age 25, especially between middle or high school education and college or higher education. Cerebrovascular disease may be related to adverse childhood living conditions, along with liver disease, liver cancer, and stomach cancer [39,40]. Poor socioeconomic environments and their inequitable distribution during and after the Japanese colonial occupation and the Korean War (1950-1953) might have had effects on the socioeconomic inequalities in mortality from these causes. The prevalence of cigarette smoking, the main risk factor for lung cancer, exceeded 50% among Korean men until the early 2000s, with the highest rate of about 79% in 1980 [41]. High smoking rates and high absolute differentials in smoking rates by educational level [42] might have contributed to the increase in mortality and mortality inequalities from lung cancer, especially in men. The percent contribution by cerebrovascular disease to the educational difference in life expectancy was greater among women than men, while the percent contributions as well as absolute contributions (years) by lung cancer, stomach cancer, and liver cancer were greater in men than women. These results are similar to a previous study showing that, in women, the contribution of cerebrovascular disease was greater than that of cancer in southern and eastern European countries [1].
The biggest difference between the results of this study and findings from northern or western European countries is the size of the contribution by ischaemic heart disease to socioeconomic inequalities in mortality, as indicated in prior Korean studies [32,43]. This study revealed that the contribution of ischaemic heart disease was relatively small in Korea, accounting for 1-2% and 3-6% of total educational inequalities in life expectancy in men and women, respectively. Meanwhile, ischaemic heart disease was the most important contributor to total mortality inequalities in northern and western Europe [1,4]. However, mortality rates and absolute socioeconomic inequality in ischaemic heart disease are increasing rapidly in Korea [43]. Considering the secular trend toward a westernized diet, a risk factor for ischaemic heart disease, in Korea [44], thorough monitoring of changes in socioeconomic inequalities in ischaemic heart disease is needed.
Our study has strengths and limitations. We presented age- and cause-specific contributions to the socioeconomic inequalities in life expectancy at age 25 using Arriaga's decomposition method, while most previous studies showed only age-specific contributions and/or cause-specific contributions. Detailed quantification of age- and cause-specific contributions to socioeconomic inequalities in life expectancy allowed us to present varying age-specific contributions for each cause of death and to identify priority age groups and causes of death. However, we used unlinked death certificate and census data, which may produce a numerator-denominator bias [45]. A prior Korean study examined this issue [18]. When the educational level was categorized into three categories (elementary school or less, middle or high school graduate, college or higher), the percentage agreement between death certificate data and health survey data was 89.4% and the kappa value was 0.75 [18], which means the reliability level was substantial [46]. Thus, we believe that the numerator-denominator bias would be minimal.
---
Conclusions
Educational differences in life expectancy were substantial in Korea. Liver disease and suicide were important contributors to the differences among younger age groups, while cerebrovascular disease and lung cancer were important among older age groups. The age-specific contributions of different causes of death to life expectancy inequalities by educational attainment varied with the educational comparisons. Differing age-specific distributions of educational levels, due to the remarkable improvement in education during the past decades, may explain these findings, as each level of educational attainment as an SEP indicator can have distinct meanings in the context and history of Korean society. Exploring age- and cause-specific contributions to socioeconomic inequalities in life expectancy allows us to better understand the nature of socioeconomic mortality inequalities and to suggest specific priority areas for policy and intervention.
---
Competing interests
The authors declare that they have no competing interests.
Authors' contributions KJC participated in study design and drafted the manuscript. YHK conceived the original idea for the study and gave critical comments on the draft manuscript. HJC participated in study design and critical revision of the manuscript. SCY supervised study design, performed the statistical analysis and gave critical comments on the draft manuscript. All authors read and approved the final manuscript. | 21,239 | 2,092 |
d6a061a0660ffbfadfef08338d59f8fccf854fed | Fathers’ Involvement with Their Nonresident Children and Material Hardship | 2,011 | [
"JournalArticle"
] | Children in single-parent families, particularly children born to unmarried parents, are at high risk for experiencing material hardship. Previous research based on cross-sectional data suggests that father involvement, especially visitation, diminishes hardship. This article uses longitudinal data to examine the associations between nonresident fathers' involvement with their children and material hardship in the children's households. Results suggest that fathers' formal and informal child support payments and contact with their children independently reduce the number of hardships in the mothers' households; however, only the impact of fathers' contact with children is robust in models that include lagged dependent variables or individual fixed effects. Furthermore, cross-lagged models suggest that material hardship decreases future father involvement, but future hardship is not diminished by father involvement (except in-kind contributions). These results point to the complexity of these associations and to the need for future research to focus on heterogeneity of effects within the population. Today, more than one in four U.S. children (26 percent) lives with only one parent (U.S. Census Bureau 2010). Moreover, half of all children born in the last several decades are predicted to spend some portion of their childhood in a single-parent family (Bumpass and Sweet 1989). Further, 41 percent of all births today are to unmarried mothers, and that figure is nearly 70 percent among black mothers (Hamilton, Martin, and Ventura 2009). Although some children in single-parent families live with their fathers, the overwhelming majority (84 percent) live with their mothers and have a living nonresident father (U.S. Census Bureau 2010). Research suggests that children growing up in single-parent families, particularly children born to unmarried parents, are much more likely to be poor and to experience more material hardships than those in two-parent families (Lerman 2002;DeNavas-Walt, Proctor, and Smith 2008). As a consequence, children in single-parent families also face disadvantage in a number of important domains: health, development, and educational attainment (McLanahan and Sandefur 1994;Magnuson and Votruba-Drzal 2009). Nonresident fathers' involvement in their children's lives, both through their financial contributions and their physical involvement, can ameliorate some of these disadvantages. Research suggests that child support payments from fathers increase income and reduce poverty in custodial mothers' households (Meyer and Hu 1999;Bartfeld 2000;Sorensen and Zibman 2000); however, other research suggests that payments from poor fathers are either too small or inconsistent to improve financial well-being in the mothers' household (Mincy and Sorensen 1998; Cancian and Meyer 2004b). Research also finds that child support |
income is associated with such positive measures of child well-being as cognitive skills, educational attainment, and child behavior (Graham, Beller, and Hernandez 1994;Knox and Bane 1994;Hernandez, Beller, and Graham 1995;Knox 1996;Argys et al. 1998). Unfortunately, only a minority (approximately 20 percent) of unwed nonresident fathers pay formal child support (Nepomnyaschy and Garfinkel 2007), yet an overwhelming majority of these fathers are involved with their children in other ways. Examples of this involvement include informal and in-kind contributions, as well as regular contact with their children (Waller and Plotnick 2001;Huang 2006;Nepomnyaschy 2007;Garasky et al. 2010;Nepomnyaschy and Garfinkel 2010). Much less research focuses on how these contributions of time, money, and goods affect children's economic circumstances. This study examines the effects of these different types of father involvement on children's experience of material hardship.
Although poverty is the most commonly used indicator of serious economic distress, indicators of material hardship are complementary, and now commonly used, alternative measures (Beverly 2000, 2001). Such indicators include going without food, being evicted from one's home, delaying needed medical care, and having heat, electricity, or phone service shut off. These are not only important mediators of the relation between poverty and child well-being; they are found to be directly related to child well-being, and the relations are independent of income (Beverly 2001;Gershoff et al. 2007). For example, results from analyses that control for household income or poverty status find that children who live with food insecurity have worse health, lower cognitive skills, worse academic performance, and more behavior problems than those who do not live with food insecurity (Alaimo, Olson, and Frongillo 2001;Alaimo, Olson, Frongillo, and Briefel 2001;Cook et al. 2004;Ashiabi 2005;Slack and Yoo 2005;Whitaker, Phillips, and Orzol 2006;Rose-Jacobs et al. 2008;Zaslow et al. 2009). Some studies control for other indicators of socioeconomic status, pointing to the deleterious effects that multiple and cumulative hardships have on child well-being (Ashiabi and O'Neal 2007;Cook et al. 2008;Yoo, Slack, and Holl 2009;Frank et al. 2010).
---
How Father Involvement Affects Material Hardship
Low income is obviously one principal determinant of material hardship. Although material hardship disproportionately affects children living in poverty, 65 percent of families living between 100 and 200 percent of the poverty threshold are estimated to experience one or more hardships (Boushey et al. 2001;Gershoff 2003). Furthermore, material hardship's correlations with income and poverty status are weaker than might be expected (Mayer and Jencks 1989;Cancian and Meyer 2004a;Short 2005;Sullivan, Turner, and Danziger 2008).1 There are at least two reasons for the modest size of the correlations. First, current income (the measure on which poverty status is most often based) is not a comprehensive indicator of a family's economic circumstances. In-kind transfers are not counted as income, nor is wealth or access to credit. All three of these resources may enable families to avoid hardship during periods of unemployment or other shocks to income (Shapiro and Wolff 2001;Sullivan et al. 2008). So too, Kathryn Edin and Laura Lein (1997) show that low-income mothers use a number of survival strategies to avoid hardship. For example, they may rely on social programs, friends, family, and underground employment. None of these strategies is usually included in income measures. Second, material hardship may result not only from a lack of resources but also from difficulty managing those resources (Heflin, Corcoran, and Siefert 2007). For example, in analyses that control for income and other indicators of socioeconomic status, families in which there are members with drug problems, alcohol problems, depression, or other indicators of poor mental health are found to experience more material hardship than families in which members lack those characteristics (Heflin et al. 2007;Sullivan et al. 2008). Fathers' contributions of time, goods, and money can affect mothers' resources and their ability to manage such resources.
---
Fathers' Material Contributions and Children's Hardship
Nonresident fathers' material contributions consist of formal cash support (that is paid through the formal child support system), informal cash support (cash that is given outside the formal obligation), and noncash support (in-kind contributions). Edin and Lein (1997) describe the numerous ways in which mothers use contributions from fathers to improve the economic circumstances in their households. For example, nonresident fathers' material support, whether it is provided through formal support, informal cash payments, or in-kind contributions, can directly affect the level of hardship in the mothers' house and can increase the household's income. Because cash contributions supplement the mother's income, they can readily be used to pay rent, utilities, and other bills, as well as to purchase food, clothing, and other necessities. Fathers' in-kind contributions can directly reduce hardship (e.g., contributions of food or clothing) or can allow the mother to address other needs with the income she would have spent on those items. Fathers can also reduce hardship by offering to pay rent, utilities, or telephone bills directly.
Although fathers' provision of material support (formal support, informal cash support, and in-kind support) can reduce hardship in the mother's household, these different types of support are not interchangeable and could have different effects on material hardship (Nepomnyaschy 2007;Garasky et al. 2010). Formal support, which usually arrives in the mail at regular monthly intervals, may be more stable than informal support. It may allow the mother to plan for expenses and to avoid hardships. However, high levels of unemployment, prior incarceration, and other factors may impede efforts by many lowincome fathers to make regular child support payments through the formal system (Mincy and Sorensen 1998;Cancian and Meyer 2004b;Geller, Garfinkel, and Western 2008;Swisher and Waller 2008). In addition, the conditions that state policies impose on welfare benefits require recipients to relinquish their rights to formal child support collected on their behalf; in the majority of states, mothers on welfare receive none of the support provided by fathers (Roberts and Vinson 2004). These conditions therefore provide fathers with an incentive to informally contribute. Finally, fathers may have more control over how their payments are spent if they pay informally, leading to reduced hardship (Weiss and Willis 1985); however, informal support could increase hardship if mothers feel they must spend these contributions on items that are visible to the father (e.g., clothes, toys, or furniture), rather than on such necessities as rent or phone bills. It is also possible that fathers' material contributions could increase hardship if their provision of support leads to declines in support from friends, relatives, new partners, or other people in the mother's life. The reduction in other support would have to be greater than the support the father provides, however, and this seems highly unlikely.
---
Fathers' Visitation and Material Hardship
Fathers' physical contact with their children can also affect material hardship. Regular visits from fathers may constitute a free source of child care and may substitute for paid child care. Regular visits may also allow mothers to spend time in the labor force, increasing their income. Visitation may make the father aware of his child's needs and may induce him to directly help the mother avoid certain hardships. Informal and in-kind support is often provided when fathers come to see the child (Nepomnyaschy 2007;Garasky et al. 2010). Irregular cash or in-kind support of this sort (e.g., when a father pays a utility bill or the rent for a month) is not likely to be captured in the Fragile Families data by the measure of informal cash support or the measure of in-kind support. Fathers also may loan the mother money. As Yoram Weiss and Robert Willis (1985) theorize, a father's visit allows him to monitor how money is spent in the mother's household. Thus, fathers' visits can reduce hardship if mothers are induced to use money in ways that improve the well-being of the child. Finally, a father's regular visits and involvement with his child can reduce a mother's level of stress and provide a sense of security and stability. This sort of intangible support may help her to manage the financial resources available to her.
Fathers' visits could also increase hardship. If the parents' relationship is conflictual or violent, his visits could increase stress and contribute to an increase in hardship. Finally, hardship may increase if fathers consume resources (food) while in the household or if they discourage contributions from friends, relatives, new partners, or other sources. In sum, the effects of a father's time with his children, whether those of time spent in the mother's house or in his, could either reduce or increase material hardship in the mother's household.
---
Empirical Evidence of the Effects of Fathers on Material Hardship
In their landmark ethnographic study of the survival strategies of single mothers, Edin and Lein (1997) find that the overwhelming majority of mothers rely on informal support from their networks, particularly from the fathers of their children. Much qualitative research on low-income single mothers and fathers confirms these findings (Roy 1999;Waller and Plotnick 1999, 2001;Pate 2002, 2006;Heflin, London, and Scott 2009). Insofar as nonresident fathers' involvement can be considered an indicator of social support, there is much evidence to suggest that social support and social networks have protective effects that can reduce material hardship (Mayer and Jencks 1989;Lee, Slack, and Lewis 2004;Sullivan et al. 2008).
Little quantitative research focuses on the effect of nonresident fathers' involvement on material hardship in the mothers' household. The authors know of only two studies that examine this question, and this is the specific focus of only one study. In a study that controls for mothers' receipt of informal and formal child support, Bong Joo Lee and colleagues (2004) look at the effects of welfare receipt and work activities on four measures of material hardship among Temporary Assistance for Needy Families (TANF) recipients. They find that neither formal nor informal support is statistically significantly associated with rent, utility, or food hardship; they do find, however, that provision of formal support is statistically significantly associated with declines in the level of perceived hardship (a summary scale based on responses to four items that ask about feelings regarding one's own financial situation). Steven Garasky and Susan Stewart (2007) examine the effects of nonresident fathers' involvement (both financial and physical) on three measures of food insecurity in their children's households. They find that frequent visits (more than once per week) are consistently protective against food insecurity but that provision of child support is only statistically significantly protective against one measure of insecurity. They hypothesize that fathers make in-kind contributions while they are visiting and that such contributions are the mechanism through which fathers' visits affect hardship. However, they do not directly measure in-kind support and are not able to distinguish formal from informal cash support. Further, they measure hardship and fathers' involvement in the same time period.
The analyses in the current study build on this work in a number of ways: by disaggregating financial support from fathers into formal cash support, informal cash support, and in-kind contributions; incorporating temporal ordering by using panel data; and employing eight indicators of material hardship. The analyses also consider mothers' individual attributes that may affect their ability to avoid hardships. These include physical health, mental health, impulsivity, cognitive ability, and access to social support.
---
Data and Methods
This article uses data from the Fragile Families and Child Wellbeing Study, a panel study of approximately 4,000 children born to unmarried parents between 1998 and 2000 in 20 large U.S. cities in 15 states. It takes advantage of four waves of panel data, starting with a baseline interview conducted when the children were born and following them up to age 5. Mothers and available fathers were interviewed at the hospital within a few days of the child's birth; fathers who were not at the hospital were interviewed elsewhere. Follow-up interviews with both parents were conducted by telephone when the child was approximately 1, 3, and 5 years old. Data in the Fragile Families study are representative of births to unmarried parents in the late 1990s in all U.S. cities with populations of 200,000 or more (see Reichman et al. [2001] for a detailed description of the study design). Of the unmarried mothers interviewed at baseline, 89 percent were reinterviewed at the 1-year follow-up, 86 percent were reinterviewed at the 3-year survey, and 84 percent were reinterviewed at the 5-year follow-up. At each wave, mothers were asked numerous questions pertaining to fathers' characteristics. Their responses provide detailed information about fathers, even if the fathers were not interviewed.
The current study relies on mothers' reports about fathers' sociodemographic characteristics and involvement with their nonresident children. Although it would be ideal to have fathers' reports of their involvement with children, fathers were not asked about their child support payments to mothers at the 3-and 5-year interviews. In addition, though the Fragile Families survey was able to identify and interview a larger proportion of unmarried fathers than any other national survey, many fathers are missing from the data. 2 Estimates suggest that fathers missing from the data are more likely to be nonresident (the group on whom this study focuses) and are more disadvantaged on socioeconomic characteristics than those who were interviewed (Teitler, Reichman, and Sprachman 2003). Therefore, relying on fathers' reports could introduce nonresponse bias and could substantially reduce sample sizes across waves.
The sample in the current study consists of mothers who were not cohabiting with the focal child's father at each follow-up interview (1-, 3-, or 5-year follow-up) and who were reinterviewed at least at the 1-year survey. The majority of mothers (69 percent) participated in more than one follow-up interview. Stacking the three waves of follow-up data creates an unbalanced panel of 4,469 repeated observations on 2,180 unique mothers. The sample sizes are 1,373 at the 1-year interview, 1,478 at the 3-year interview, and 1,618 at the 5-year interview. The increase in sample sizes from wave to wave reflects the trend that unmarried parents' cohabiting relationships end over time; however, a small number of mothers are lost to attrition from wave to wave. 3 Besides excluding mothers who were cohabiting with the focal father, the sample also excludes those on whom data are missing for variables of interest at each wave. Specifically, data from father involvement variables are missing for 213 mothers at the 1-year survey, for 203 mothers at the 3-year survey, and for 257 mothers 2 Seventy-five percent of eligible unmarried fathers (those who were associated with an interviewed mother) were interviewed at the baseline survey. Their follow-up response rates were 65 percent at the 1-year survey, 63 percent at the 3-year follow-up, and 61 percent at the 5-year follow-up. 3 Of the 3,711 unmarried mothers in the baseline sample, 3,293 were reinterviewed at the 1-year follow-up. Of these, 50 percent (1,642) were not cohabiting with the father at that follow-up. At the 3-year interview, interviews were conducted with 3,009 mothers who were unmarried at baseline. Of these, 58 percent (1,731) were not cohabiting at that point. At the 5-year interview, follow-up interviews were conducted with 2,921 mothers who were unmarried at baseline. Of these, 66 percent (1,934) were not cohabiting at that time.
at the 5-year survey. Data on hardship variables are missing for 14 cases at the 1-year survey, for 19 cases at the 3-year survey, and for 8 cases at the 5-year survey. Data on covariates are missing for 63 mothers at the 1-year interview, for 57 mothers at the 3-year interview, and for 59 mothers at the 5-year interview. These criteria exclude a total of 207 mothers for whom observations are missing at every wave. Supplementary analyses based on balanced panel data (i.e., data in which each mother appears in all three follow-up waves) examine a subsample of 2,337 observations of 779 unique mothers.
---
Material Hardship
Material hardship, the outcome of interest, is measured using a series of questions posed in several national surveys, including the Survey of Income and Program Participation, the National Survey of America's Families, and the American Housing Survey (Beverly 2001). At all three follow-up surveys, mothers were asked whether they had to do any of the following things in the 12 months prior to the interview because there was not enough money: (1) receive free food or meals; (2) not pay the full amount of rent or mortgage payment; (3) not pay the full amount of a gas, oil, or electricity bill; (4) have gas or electric service turned off or oil not delivered; (5) have phone service disconnected; (6) be evicted from your home or apartment for not paying the rent or mortgage; (7) stay in a shelter, abandoned building, an automobile, or any other place that was not meant for regular housing, even for one night; (8) not seek medical attention for anyone in your household who needed to see a doctor or go to the hospital, because of the cost.
The primary measure of material hardship is based on the number of hardships that a family experienced in the 12 months prior to each wave. The number of affirmative responses to these measures was used to create a variable with a possible range from zero to eight; zero indicates that the mother reports no hardships, and a score of eight indicates that she responded affirmatively to all eight items. However, prior research points to the fact that each of these measures of hardship may have different antecedents, may lead to different consequences, and may represent very different types of problems (Beverly 2000(Beverly , 2001;;Ouellette et al. 2004;Heflin, Sandberg, and Rafail 2009;Rose, Parish, and Yoo 2009). Each hardship indicator is therefore also analyzed separately.
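As a simple illustration, a count of this kind could be built as follows; the column names are hypothetical stand-ins for the eight yes/no items listed above (coded 1 = experienced, 0 = not experienced).

```python
import pandas as pd

# Hypothetical names for the eight hardship indicators
HARDSHIP_ITEMS = ["free_food", "missed_rent", "missed_utility_bill",
                  "utility_shutoff", "phone_disconnected", "evicted",
                  "stayed_in_shelter", "forwent_medical_care"]

def hardship_count(df: pd.DataFrame) -> pd.Series:
    """Sum the eight binary indicators into a 0-8 material hardship count."""
    return df[HARDSHIP_ITEMS].sum(axis=1)
```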
---
Father Involvement
Both fathers' financial and physical involvement with their children are considered in this study. The research examines three types of contributions: formal child support, informal child support, and in-kind support. Formal child support is support received through an established child support order. Informal child support is any cash support received from the father outside of a formal order. In-kind support includes clothes, toys, medicine, food, or other noncash support provided by the father.
Formal and informal cash support are measured by continuous variables that reflect the average amount of support provided per month at each wave for the period that the father was eligible to pay support. Fathers' eligibility to pay formal support is defined as the number of months elapsed since the start of the parents' child support order; for informal support, eligibility is defined as the number of months elapsed since the father stopped living with the mother. Eligibility for informal support is assessed at each wave (for fathers who never lived with the mother, it is the total reporting period at each wave). The authors choose to create a monthly amount of support received because the figure reported for the year preceding an interview is conflated with the length of time that a child support order has been in place or how long ago parents stopped cohabiting. For example, two mothers may report $1,000 of formal support received in the past year, but one obtained a child support order 2 months ago and therefore received $500 per month; the other mother has had an order for 10 months and therefore received only $100 per month. The effect of child support on economic circumstances in these two mothers' households will be very different, and using the yearly report of receipt masks those differences. Fathers who are reported to have paid no support are coded zero.
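A minimal sketch of the monthly-amount calculation implied by the example above; the function and variable names are illustrative rather than those of the Fragile Families files, and the cap at 12 months is an assumption reflecting that reported amounts cover the year before the interview.

```python
def monthly_support(amount_past_year: float, months_eligible: float) -> float:
    """Average monthly support over the period the father was eligible to pay:
    months since the support order began (formal) or since the parents stopped
    living together (informal), capped at the 12-month reporting window (assumption)."""
    if months_eligible <= 0:
        return 0.0
    return amount_past_year / min(months_eligible, 12)

monthly_support(1000, 2)   # order in place for 2 months  -> 500.0 per month
monthly_support(1000, 10)  # order in place for 10 months -> 100.0 per month
```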
In-kind support is measured as a dichotomous variable. The variable is positive if the mother indicates that, in the year prior to the interview, the father bought clothes, toys, medicine, food, or other items for the child. The response is considered to be affirmative if the mother reports that he often or sometimes bought those items; it is considered to be negative if she reports that he rarely or never bought them. Fathers' physical contact with children is measured as the number of days on which he saw the child in the 30 days prior to the interview; the measure includes fathers who did not see their child (and, thus, whose days of contact is zero). Among the father involvement variables, the highest estimated correlation is between in-kind support and the number of days of contact (0.59), but the correlations are also high between the number of days of contact and informal support (0.34), as well as between in-kind and informal support (0.34). The lowest estimated correlation is between formal and in-kind support (0.06). That between formal support and days of contact is also estimated to be low (0.004). Formal support is estimated to be negatively and statistically significantly correlated with informal support (-0.07).
---
Covariates
The analyses consider three broad categories of covariates: sociodemographic characteristics (of mothers, fathers, and children); measures of the father's commitment to the mother and child at the baseline survey; and indicators of the mother's ability to avoid hardship. Also considered is the unemployment rate in the city where the mother is interviewed at each wave. Unemployment is entered as a time-varying covariate. The mean rate of unemployment for the pooled sample is 5 percent.
Family sociodemographic characteristics-The analyses include mothers' reports about characteristics of mothers, fathers, and children. Parents' race or ethnicity is measured as non-Hispanic white, non-Hispanic black, Hispanic, and other. Parents' education is measured in three categories: less than a high school or general equivalency diploma, a high school or general equivalency diploma, and more than a high school or general equivalency diploma. Parents' age is also represented in three categories: under age 21, ages 21-29, and age 30 or over. In addition, the analysis considers whether the mother was born in the United States, whether she received TANF or food stamps in the year prior to the child's birth, whether the father worked in the week prior to the child's birth, the sex of the child, and whether the child was low birth weight. All these variables are measured at the baseline survey and do not vary over time. The analysis also includes several time-varying sociodemographic variables that are measured at each wave of the survey: age of the child (in months), the number of children under age 18 in the mother's household, the number of adults in the household, whether the mother has a new married or cohabiting partner, and the average monthly household income (minus child support received).
Previous research finds that many of these variables are related to both hardship and fathers' involvement with their children. The authors expect that children of more advantaged mothers will have more involved fathers (because these fathers are also more advantaged) and those mothers will therefore be more likely to avoid hardship.
Father's commitment to mother and child at baseline-This set of variables includes four items drawn from mothers' reports: the parents' relationship at the baseline survey (cohabiting, romantically involved but not cohabiting, just friends, or no relationship); whether the father contributed cash or any other resource during the pregnancy; whether he visited the mother and child in the hospital; and whether he intended to contribute to the child in the future. The father's commitment to the mother and child at baseline is likely associated with his investment in the child, as well as with the likelihood that he will contribute financially and be involved with the child. These fathers may also select mothers who are likely to avoid hardships.
Mother's ability to avoid hardship-Prior research uses many of the previously described variables as proxies for the mother's ability to avoid hardship. The extensive data in the Fragile Families survey enable this study to include explicit measures of such attributes. The baseline survey provides measures of the mother's access to social support and of her health. Access to social support is measured as the sum of the mother's responses (yes = 1, no = 0) to three questions about whether, in the year following the interview, she would be able to count on someone in her family to (1) loan her $200, (2) provide her with a place to live, and (3) help her with babysitting or child care. Possible scores range from 0 to 3; higher scores indicate more access to social support. Maternal health is measured as a dichotomous variable for whether the mother reports excellent health as opposed to very good, good, fair, or poor.
To measure mothers' cognitive ability, the study uses an eight-item word similarities test that is based on the Revised Wechsler Adult Intelligence Scale (WAIS-R).4 A six-item scale measures mothers' impulsivity. 5 The study also measures mothers' reports on the mental health of their mothers (i.e., the focal child's maternal grandmother). 6 The variables for mothers' cognitive ability, mothers' impulsivity, and maternal grandmothers' mental health are only measured at the 3-year survey; however, because the variables are assumed to be fixed over time, the analyses treat them as baseline measures. Finally, the study includes a measure of whether the mother reported at the baseline survey that she had a drug or alcohol problem.
Mothers who have more access to social support, who are less impulsive, have higher cognitive scores, and better mental and physical health are expected to have fewer hardships than mothers who do not have these characteristics. Prior research establishes a strong link between access to social support (particularly the ability to borrow money) and a reduction in hardship (Mayer and Jencks 1989;Lee et al. 2004;Sullivan et al. 2008). Mothers' physical health, mental health, and cognitive ability also are linked to hardship (Danziger et al. 2000;Kalil, Seefeldt, and Wang 2002;Heflin et al. 2007;Sullivan et al. 2008); however, these relations could be endogenous. For example, experiences of material hardship may lead to poor mental health, and poor mental health can affect the ability to manage resources. To minimize this specification problem, the analyses use the measure of grandmother's mental health as an exogenous proxy for mothers' own mental health. A mother's current problem with drugs or alcohol may also be endogenous to material hardship; therefore, the analyses include mothers' baseline report of a drug or alcohol problem.
---
Analytic Strategy
First, descriptive statistics are presented for all previously described measures for the full sample and disaggregated by whether the mother reports any hardships. Next, pooled, cross-sectional, ordinary least squares (OLS) models are presented, which regress the number of hardships in the mother's household on measures of fathers' involvement. Nested models first control only for sociodemographic characteristics and then add indicators of fathers' commitment to the mother and child. The models finally add measures of mothers' ability to avoid hardships (some of these measures were not available in prior research). Standard errors in all models are adjusted to account for multiple observations on each individual over time.
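A hedged sketch of the pooled OLS specification with standard errors clustered on mothers, using statsmodels; the data file, data-frame, and variable names are hypothetical, and the actual models include the full covariate sets described earlier.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format extract with one row per mother-wave
panel = pd.read_csv("ff_mother_waves.csv")

formula = ("hardship_count ~ formal_support + informal_support + inkind_support"
           " + days_contact + household_income + city_unemployment")

# Pooled cross-sectional OLS with standard errors clustered on the mother
pooled = smf.ols(formula, data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["mother_id"]})
print(pooled.summary())
```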
Selection bias-As with all observational studies, a number of potential biases limit the ability to make causal inferences. Families with fathers who pay support and visit their children may differ in unobserved ways from families with fathers who do not, and such differences may bias the estimated relations between fathers' involvement (either financial or physical) with their children and hardship in the mothers' household. In the extreme case, the estimated relation could be fully attributed to these unobserved differences, and there would be no causal relation between the two. For example, fathers may contribute less time and money to a custodial parent who is in poor mental health, has problems with drugs or alcohol, or has low cognitive skills and is not capable of making good choices for her family.
The study addresses this potential bias in three ways. First, as discussed previously, the analyses control for many characteristics that are generally unobserved in many prior studies. Second, the study takes advantage of the panel structure of the Fragile Families data by including a lagged dependent variable in the OLS model: the number of hardships at the prior wave. Third, the analyses estimate models with individual fixed effects, which only examine effects within individuals. Inclusion of a lagged dependent variable reduces the possibility that fixed, unobserved differences drive the results, because these differences should be reflected in the lagged dependent variable. However, effects are still estimated within and between individuals. Unobserved heterogeneity is therefore still possible. Individual fixed-effects models rely only on changes within individuals; using them eliminates the possibility that the results are driven by constant unobserved differences between individuals, though this method does not address unobserved within-person differences that change over time. One drawback of fixed-effects analysis is that results are estimated only for those individuals whose values on the dependent variable change over time and for those who are observed at least twice in the data. This leads to a less representative sample and one that is substantially smaller in size than the full analysis sample. Because the analyses hold constant all characteristics that do not vary over time (within individuals), these regressions only include the variables that change over time.
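The two robustness strategies just described can be sketched as follows; the column names are again hypothetical, and the fixed-effects model is implemented here through a simple within-mother demeaning of the time-varying variables, which is equivalent to including individual fixed effects (the standard errors from this shortcut are only approximate).

# Minimal sketch: (1) lagged-dependent-variable model and (2) individual fixed effects
# via the within transformation. Hypothetical column names throughout.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("fragile_families_long.csv").sort_values(["mother_id", "wave"]).dropna()

# (1) lagged dependent variable: hardships reported at the prior wave
df["lag_hardships"] = df.groupby("mother_id")["num_hardships"].shift(1)
ldv = df.dropna(subset=["lag_hardships"])
ldv_fit = smf.ols("num_hardships ~ lag_hardships + inkind + informal_support"
                  " + formal_support + contact_days", data=ldv).fit(
    cov_type="cluster", cov_kwds={"groups": ldv["mother_id"]})

# (2) individual fixed effects: demean all time-varying variables within each mother,
# so that only within-mother change over waves identifies the coefficients
varying = ["num_hardships", "inkind", "informal_support", "formal_support",
           "contact_days", "hh_income", "n_children", "n_adults"]
within = df[varying] - df.groupby("mother_id")[varying].transform("mean")
fe_fit = sm.OLS(within["num_hardships"],
                sm.add_constant(within.drop(columns="num_hardships"))).fit(
    cov_type="cluster", cov_kwds={"groups": df["mother_id"]})
print(ldv_fit.params, fe_fit.params)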
Supplementary analyses present results based on a balanced panel of observations. This panel includes only those cases in which the mother is observed in the sample at all waves. This analysis addresses the possibility that the results are driven by the mothers with the greatest number of observations, since these mothers contribute the most data. Other supplementary analyses examine each type of material hardship separately. Because each indicator of material hardship is a dichotomous variable, these analyses employ pooled, cross-sectional logistic regression models and fixed-effects logit models.
Reverse causality-Another potential source of bias is reverse causality. Specifically, material hardship in the mother's household may affect fathers' financial or physical involvement. For example, a mother may call on the father for help because she is having financial problems, and he may provide some financial assistance when he comes to see the child. These events would lead to a positive but spurious association between fathers' involvement and hardship; such an association could offset or dominate the true negative causal effect (if there is one). Reverse causality could also lead to a spurious negative association between involvement and hardship. For example, a mother's experience of hardship (phone disconnected or eviction) may prevent the father from visiting the child. One potential way to disentangle the temporal ordering of effects is to measure fathers' involvement and hardship at the prior wave, to explicitly test whether hardship at that prior wave affects fathers' involvement in the current wave, and to consider whether involvement at the prior wave affects hardship at the current wave.
The analysis uses Mplus software (version 4) to estimate these cross-lagged models within a structural equation modeling framework. In these models, the mean of father involvement at the 1- and 3-year surveys is used to predict hardship at the 5-year survey. So too, mean hardships from the 1- and 3-year surveys are used to predict father involvement at the 5-year survey. The models control for baseline characteristics as well as the lagged measure of the dependent variable (father involvement and hardship at the 3-year survey). The structural equation modeling framework, which estimates these reciprocal effects simultaneously, allows for the estimation of the effects of earlier father involvement on future material hardship independently of the effects of earlier father involvement on future father involvement and vice versa.
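In simplified notation, with X denoting the vector of baseline controls, the cross-lagged specification described above can be written as the pair of simultaneously estimated equations below (a sketch of the model, not the exact Mplus syntax):

\text{Hardship}_{5} = \alpha_0 + \alpha_1\,\overline{\text{FI}}_{1,3} + \alpha_2\,\text{Hardship}_{3} + \boldsymbol{\alpha}_3'\mathbf{X} + \varepsilon_1
\text{FI}_{5} = \beta_0 + \beta_1\,\overline{\text{Hardship}}_{1,3} + \beta_2\,\text{FI}_{3} + \boldsymbol{\beta}_3'\mathbf{X} + \varepsilon_2

Here FI denotes a given father-involvement measure and the overbars denote averages across the 1- and 3-year surveys; the coefficient alpha_1 captures the hypothesized direction (earlier involvement to later hardship), while beta_1 captures the reverse causal direction.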
---
Results
---
Sample Description
Outcomes-Table 1 presents descriptive characteristics for the full sample of mothers who had nonmarital births and reported at each wave that they do not reside with the father of the focal child. Nearly half (49 percent) of the sampled mothers report that they experienced at least one of the eight hardships measured in these analyses. On average, sample mothers report experiencing 0.99 hardships in the year prior to the survey. Utility and phone bills account for the most commonly reported hardships; 27 percent of recipients report that they did not pay the full amount of a gas, oil, or electricity bill, and 23 percent report that their phone service was turned off. Other hardships are reported less frequently. Fifteen percent of participants report that they did not make the full rent or mortgage payment, 12 percent report that they received free food or meals, and 9 percent report that their gas or electric service was shut off or oil was not delivered. Six percent report that someone in their household needed to see a doctor or go to the hospital but did not go because of a lack of money. The least commonly reported hardships were eviction for failure to make rent or mortgage payments (3 percent) and staying in a place not meant for housing (3 percent).
On average, mothers reporting any hardship (2,171 participants; second column of table 1) report 2.04 total hardships and are much more likely than the full sample to experience each of the individual hardships. More than half (56 percent) of those who reported any hardship indicate that they did not pay all of a utility bill (gas, oil, or electricity) that was due, and 47 percent indicate that their phone service was shut off. Nearly one-third (30 percent) of these mothers report that they did not make a full rent or mortgage payment, and one-quarter report that they received free food or meals.
Father involvement-Nearly half of fathers (48 percent) reportedly made an in-kind contribution during the period they lived apart from their children. Slightly fewer (42 percent) reportedly made an informal cash contribution. Informal cash contributions average $53 per month across all fathers. Far fewer fathers (only 21 percent) reportedly made formal payments. These payments average $39 per month across all fathers. Not reported in the table is the shift over time from in-kind and informal support to formal child support. As the time from the child's birth (and the length of time since parents ceased cohabitation) increases, reported informal support (which is initially high) declines and formal support (which is initially low) increases. These amounts become approximately equal at 36 months after the child's birth. After the 36-month point, formal support is estimated to become greater than informal support. (For an analysis of the effects of child support enforcement on informal and formal support over time, see Nepomnyaschy and Garfinkel [2010].) Sampled mothers report that well over half (57 percent) of fathers had contact with their child in the 30 days prior to the interview. Across the sample, fathers are estimated to have contact with their child on an average of 7.5 of the 30 days prior to the interview.
Children living in families that reportedly experienced at least one hardship are found to be less likely to receive in-kind support from fathers than are children in families that report no hardship (46 percent vs. 50 percent). So too, children in families that report any hardship are found to receive less informal and formal cash support. They are reported to have less contact with their father (6.9 of the 30 days prior to interview) than children living in families with no hardships (8.1 days). These differences are statistically significant.
Family sociodemographic characteristics-The mothers in this sample report that they are mostly nonwhite (64 percent identify themselves as non-Hispanic black and 21 percent identify themselves as Hispanic), have low education (39 percent did not complete high school), and were relatively young at the time of the focal child's birth (38 percent were less than 21 years old). Most report that they were born in the United States (93 percent). Fathers' characteristics are reported to be similar to those of mothers, but fathers are older (80 percent were age 21 or older at the baseline interview; 62 percent of mothers were age 21 or older at that time). Only 59 percent of these fathers were reported to be employed in the week prior to the child's birth. Nearly half of the mothers (48 percent) report that they received TANF or food stamps at the time of the child's birth, and 12 percent report that the focal child was low birth weight (i.e., weighed less than 2,500 grams). On average and across the pooled years of data, mothers report that there are 2.4 minor children and two adults living in their households. Twenty-two percent of mothers report that they have a new married or cohabiting partner. On average, sampled mothers report approximately $1,700 of monthly household income across the pooled waves (this excludes income from child support).
Sampled black and white mothers are more highly represented among those reporting experience of a hardship (65 percent of black mothers, 14 percent of white mothers) than among those reporting no hardship (63 percent black, 11 percent white). The percentage of Hispanic mothers in the subsample reporting a hardship (19 percent) is smaller than that of counterparts who reported no hardship (24 percent). In general, mothers who report material hardship are found to have lower levels of education; they are more likely to not have completed high school and less likely to have a diploma, though they are slightly more likely to have some post high school education than mothers reporting no hardship. Mothers with any hardship are also more likely to have been born in the United States than those without material hardship. Over half (52 percent) of mothers with any hardship report receiving TANF or food stamps at the birth of the child, but these benefits were received by only 44 percent of mothers who report no hardship. Fathers' race, ethnicity, and age are estimated to be statistically significantly associated with the mother's report of hardship, though neither paternal educational attainment nor work status at the child's birth is associated with hardship to a statistically significant degree. Mothers experiencing hardship report a greater number of children and a fewer number of adults in the household at each wave than mothers who report no hardship. As expected, mothers experiencing any hardship report $500 less in monthly household income (nearly 25 percent less) than those who report no hardship. These differences are statistically significant.
Father's commitment to mother and child at baseline-At the baseline interview, one-third of mothers reported that they were cohabiting with the father, 42 percent reported that they were romantically involved but not cohabiting, 12 percent reported that they were friends, and only 14 percent reported that they had no relationship with the father. An overwhelming majority of fathers reportedly contributed cash or other items during the pregnancy, visited in the hospital, and intended to contribute to the child in the future.
Mothers who reported any hardship are more likely to have cohabited with the father at the time of the child's birth but are less likely to have been romantically involved with him than mothers who reported no hardships. None of the other variables measuring fathers' commitment is found to be statistically significantly associated with report of any material hardship.
Mother's ability to avoid hardship-Mothers report a high level of access to social support (the average score is 2.72 out of a possible 3 on this index), although only 30 percent of mothers reported that they were in excellent health at the baseline survey.
Participants have an average score of 2.09 on the impulsivity score (out of 4; higher is more impulsive) and a score of 6.41 (out of 16; higher is better) on the test of mothers' cognitive skills (the WAIS-R word similarities index). On average, mothers report that their mothers have 0.63 mental health problems (out of 4), and 6 percent reported at the time of the child's birth that they have their own problems with alcohol or drugs.
The levels of these hardship avoidance variables all differ to a statistically significant degree by whether mothers reported any hardship at the follow-up interviews. Mothers with at least one hardship report lower levels of access to social support, lower likelihood of being in excellent health, and higher levels of impulsivity than mothers who report no hardship. Mothers who experience any hardship report more mental health problems for their own mothers than those with no hardship. The likelihood of having a drug or alcohol problem is estimated to be greater among mothers reporting a hardship than among mothers with no reported hardship. Surprisingly, results from the WAIS-R test suggest that mothers with at least one hardship have higher scores on the cognitive skills test than mothers with no hardships.
---
Father Involvement and Material Hardship
Table 2 presents results from pooled cross-sectional OLS models that regress the number of material hardships (range 0-8) on fathers' involvement. The analyses presented here examine the full sample of mothers and control for different sets of covariates. Model 1 controls for sociodemographic characteristics of the respondent's family, model 2 adds controls for measures of fathers' commitment to the mother and child at the baseline survey, and model 3 adds controls for explicit measures of the mother's ability to avoid hardships.
The first point to consider in this table is that father involvement variables remain relatively stable across the models, as various controls are added. The measures of fathers' formal and informal cash support, as well as of the number of days of contact, are each negatively and statistically significantly associated with the number of hardships in the mother's household. The magnitudes of the coefficients are not reduced as additional controls are added for parent and child characteristics (except for a slight reduction in the size of the formal support coefficient). Results from model 3 suggest that a $100 increase in either monthly informal or formal cash support is associated with 0.05 fewer reported hardships, resulting in a 5 percent decline in the number of reported hardships (0.99 [total number of hardships for the full sample] - 0.05 = 0.94 [a 5 percent reduction]). Across all models, each extra day of father-child contact per month is associated with 0.01 (or 1 percent) fewer hardships in the mothers' household. These estimates find no statistically significant association between fathers' in-kind support and the number of reported hardships.
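The percentage interpretations used here and in the rest of the results simply scale each coefficient by the full-sample mean of 0.99 hardships; as a check of the figures just cited:

\frac{0.05}{0.99} \approx 5\% \quad \text{(per \$100 of monthly informal or formal cash support)}, \qquad \frac{0.01}{0.99} \approx 1\% \quad \text{(per additional day of father-child contact)}.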
Results in model 3 also suggest that the number of reported material hardships is not statistically significantly associated with the measures of maternal race, ethnicity, or age, once the other characteristics are included. In that model, mothers with more than a high school education are estimated to report 0.17 (or 17 percent) more hardships than those without a degree. The estimates for fathers' demographic characteristics reveal similar patterns, but there are a few differences. Fathers who are age 30 or older report 0.14 (or 14 percent) more hardships than fathers who are under age 21, and Hispanic fathers report 0.21 (or 21 percent) fewer hardships than non-Hispanic white fathers, but neither of these coefficients is statistically significant at conventional levels in the fully controlled model (model 3). Some previous research finds that age (mostly mothers' age) is positively associated with several measures of hardship (Short 2005; Heflin et al. 2007; Parish, Rose, and Andrews 2009). So too, in some studies that control for income, non-Hispanic white mothers are found to be more likely to experience some types of hardship than are mothers of other racial and ethnic groups (Gundersen and Oliveira 2001; Bauman 2002; Heflin et al. 2007; Sullivan et al. 2008). Several prior studies that control for income and other sociodemographic characteristics also find either that education is not associated with hardship or that the associations run in unexpected directions (e.g., higher education associated with more hardship; Lee et al. 2004; Garasky and Stewart 2007; Leete and Bania 2009). Mothers who report receiving TANF or food stamps in the year prior to the child's birth are estimated to experience 0.17 (or 17 percent) more hardships than do those who report receiving no such benefits. As expected, the estimates in all three models indicate that mothers' reported monthly income (minus child support) is negatively and statistically significantly associated with the number of reported hardships; every $100 of income per month is associated with 0.01 (or 1 percent) fewer hardships. Finally, the unemployment rate in respondents' cities is estimated to be positively and statistically significantly associated with the number of reported hardships; estimates in model 3 suggest that each percentage point increase in unemployment is associated with 0.04 (or 4 percent) more hardships.
Mothers who reported that they were romantically involved or just friends with the father at baseline are found to have fewer hardships than mothers who reported that they were cohabiting at that time. Perhaps mothers in these relationships have a more difficult time adjusting to the father's absence than mothers who have been living without the father since the birth of the child. Finally, the results suggest that most of the variables measuring mothers' ability to avoid hardships are statistically significantly associated with the number of reported hardships. Mothers' access to social support at baseline is associated with fewer reported hardships; mothers' impulsivity, grandmothers' mental health problems, and mothers' own drug and alcohol problems are associated with more hardships. Mothers who reported a drug or alcohol problem at baseline are estimated to experience 0.22 (or 22 percent) more hardships than mothers who do not report such a problem. As results in the bivariate models (table 1) suggest, mothers with higher scores on the WAIS-R measure of cognitive skills are estimated to report more hardships. One potential explanation for this puzzling result is that these mothers may be better at reporting hardship than mothers with lower scores on this measure.
Another important result from table 2 is the finding that the strength of some sociodemographic variables declines if other controls are added to the models, particularly controls related to mothers' ability to avoid hardships. The magnitudes of the coefficients for parents' race, ethnicity, age, and education are reduced by nearly half if these other variables are added. This finding confirms the importance of including these types of variables in studies of material hardship. It also suggests that the effect of demographic characteristics may be overestimated in previous studies' predictions of material hardship. However, the size of the monthly household income coefficient remains quite stable across models; this suggests that income (and fathers' involvement, as mentioned previously) is highly protective of material hardship, even after models are expanded to include controls for mothers' ability to avoid hardship.
---
Unobserved Heterogeneity
To address potential selection bias, table 3 presents a number of alternative specifications of the effects of father involvement on material hardship. Each column in the table represents a separate regression in which all previously discussed covariates are controlled. To facilitate comparison, the first column (OLS, unbalanced panel) repeats results from model 3 in table 2. The second column presents results from a model that includes the number of hardships from the previous wave (lagged dependent variable). The use of a lagged dependent variable should reduce unobserved heterogeneity in the results, yet the effects are still estimated within and across individuals. As expected, the number of hardships at the prior wave is found to be strongly associated with hardship in the current wave; each additional hardship at the 3-year survey is estimated to be associated with 0.40 (or 40 percent) more hardships reported at the 5-year survey. In the lagged dependent variable model, the coefficient for in-kind support is estimated to be slightly larger than that obtained from the original OLS estimates. The use of the lagged variable reduces the size of the coefficients for formal and informal support, such that neither is statistically significant. The two models produce identical coefficients for the association between the number of days of contact and the number of reported hardships. This association remains highly statistically significant in both models. These results suggest that families with different levels of father involvement may differ in unobserved ways. These differences could drive the observed associations between father involvement and reports of hardship. However, fathers' contact with children is estimated to have a protective effect on hardship, and the effect does not appear to be driven by unobserved differences.
The third column in table 3 presents results from models that include individual fixed effects. These effects are estimated only within individuals, and the underlying analyses hold constant all of the within-individual characteristics that do not vary over time. These models should eliminate all possibility that static unobserved differences drive the results. In the fixed-effects model, the estimated coefficient for informal support is substantially smaller than that in the lagged dependent variable model, and the fixed-effects estimate for formal child support remains the same as that for the lagged model. The coefficient for days of contact remains unchanged and highly statistically significant across all three models. Results from the fixed-effects model again provide evidence that the negative association between the number of days of fathers' contact and the number of reported hardships is not driven by static unobserved differences between families.
In the fixed-effects model, the association between in-kind support and the number of reported hardships becomes strongly positive and statistically significant. Fathers' provision of in-kind support is associated with 0.16 (or 16 percent) more reported hardships. This result may suggest that reverse causality is a factor in this relation, such that mothers who experience material hardship may ask fathers for help, and fathers may respond by providing noncash contributions. For example, if a mother does not have money for groceries, she may call the father and he may purchase groceries for the household. This relation is observed when effects are estimated only within individuals, but it is suppressed in previous models, because those analyses average effects between and within individuals.
The last two columns of table 3 present results from a balanced panel of mothers. This panel includes only those mothers for whom information is available from all three follow-up waves. These balanced panel analyses are conducted to reduce the possibility that the results are driven by the mothers with the most observations, another form of selection bias. The sample for these analyses is smaller than that used in the study's other models, because this sample excludes mothers for whom observations are missing at any of the waves. The OLS column in the balanced panel presents results from pooled cross-sectional OLS models (comparable to results in the OLS column of the unbalanced panel). The fixed-effects column of the balanced panel presents results from models with individual fixed effects (comparable to results in the fixed-effects column of the unbalanced panel). In general, estimates for the balanced panel models are very similar to those for the original, unbalanced panel models. In the pooled cross-sectional OLS model from the balanced panel, the magnitudes of the coefficients for informal and formal cash support are somewhat larger (more negative) than those in all previous models from the unbalanced panel; the magnitude of the estimated coefficient for the number of days of paternal contact is unchanged, but it is not statistically significant. The fixed-effects results in the balanced panel are estimated to be nearly identical to those of the unbalanced panel's fixed-effects model, though the coefficients from the balanced panel are not statistically significant because the samples are smaller in these models.
---
Individual Indicators of Hardship
Table 4 presents estimates of the association of father involvement with the eight individual, dichotomous (yes or no) indicators of material hardship. The top panel presents results from pooled cross-sectional logistic regression models, and the bottom panel presents results from fixed-effects logistic regression models. The figures in the table are odds ratios, and z-statistics are presented in parentheses. It is not surprising that the associations of father involvement vary across the different measures of hardship. In the pooled cross-sectional models, informal cash support, formal cash support, and the number of days of father-child contact are each negatively associated with seven of the eight hardship indicators, though not all of the coefficients are statistically significant. These results suggest that each of the three types of support protects mothers against the measured hardships. Neither informal cash support nor the number of days of father's contact is found to be related to whether a member of the mother's household did not see a doctor or go to a hospital because there was not enough money to do so. So too, the pooled cross-sectional estimates identify no relation between formal support and whether the family did not pay the full amount due for a utility (gas, oil, or electricity) bill. The results are less consistent for in-kind support. The results in the top panel suggest that in-kind support is positively associated with some hardships and negatively associated with others, although only two coefficients are found to be statistically significant, and they are only marginally so.
In the fixed-effects models (bottom panel of the table), sample sizes are much smaller than those in the pooled models because, as mentioned previously, fixed effects can only be estimated on individuals who experience a change on the dependent variable from wave to wave. Because these dependent variables are dichotomous, the chance that they will not change from wave to wave is much greater than would be the case in models that use a continuous measure of the number of hardships. Therefore, few of the coefficients reach statistical significance at conventional levels; however, the magnitudes of many of the coefficients are similar to or larger than those in the pooled cross-sectional models.
Following the pattern of the fixed-effects results for the continuous measure of the number of hardships (table 3), these fixed-effects models estimate that in-kind support is positively associated with each measure of hardship, though only three of the coefficients are statistically significant. The results suggest that receipt of in-kind support increases the odds that a mother will not make the full rent or mortgage payment owed (by 62 percent) and the odds that a mother's utilities will be shut off (by 89 percent). Mothers who receive in-kind support are estimated to have 3.5 times greater odds of staying in a place not meant for housing. The fixed-effects results for informal and formal cash support are much less consistent. Receipt of informal cash support is estimated to reduce a mother's odds of staying in a place not meant for housing by a statistically significant 42 percent. Receipt of formal support is estimated to reduce a mother's odds of receiving free food by 22 percent and her odds of being evicted by 57 percent.
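The percentage changes in the odds reported in this section follow from the standard transformation of an odds ratio (or of the underlying logit coefficient), for example:

\%\Delta\,\text{odds} = (\mathrm{OR} - 1) \times 100 = (e^{\beta} - 1) \times 100, \qquad \mathrm{OR} = 1.62 \Rightarrow +62\%, \qquad \mathrm{OR} = 0.43 \Rightarrow -57\%.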
The fixed-effects coefficient for the number of days of father-child contact in the month prior to survey is negatively and statistically significantly associated with four indicators of hardship. Each day of father-child contact is estimated to diminish the odds that a mother will report experiencing that hardship. Specifically, each day is estimated to reduce the odds of having phone service turned off by 2 percent, the odds of having utilities turned off by 2 percent, the odds of being evicted by 5 percent, and the odds of staying in a place not meant for living by 6 percent. The results thus far suggest that fathers' contact with children is consistently and negatively associated with hardship in the mothers' household. These associations persist across different models and specifications. The estimates for informal and formal cash support are less robust. In-kind support, by contrast, is found to be positively associated with hardship, but the association may be attributable to reverse causation.
---
Reverse Causality
In order to establish a temporal ordering of events and to explicitly examine the possibility of reverse causality, cross-lagged models are estimated. The results of these models are presented in table 5. Each stub-column item represents a cross-lagged model, and each model controls for previously discussed covariates that do not vary over time. The table presents the results of estimates with five dependent variables; the first two columns present results of estimates for the number of hardships reported during the year prior to the 5-year follow-up survey, and the last two columns present results for the four measures of father involvement (also measured at the 5-year survey). The independent variables of interest (lagged variables) represent averages across the 1- and 3-year surveys. (Alternative analyses measured the lagged variables only at the 3-year survey and at both the 1- and 3-year surveys; the results were similar to ones that are presented in table 5 and that use an average across the 1- and 3-year surveys.) Each model includes a measure of the lagged dependent variable, which is always positive and highly statistically significant. Also presented are standardized coefficients that allow for comparison of effect sizes across models and across different measures. The first two columns examine the effect of the listed father-involvement measures, assessed at the prior waves, on hardship at the 5-year survey (the original direction of interest). The third and fourth columns present the effect of hardship at the prior waves on fathers' involvement at the 5-year survey (the reverse causal direction).
The estimates identify no statistically significant association between the measure of lagged days of contact and hardship in the year prior to the 5-year interview. So too, no such association is observed between the measure of lagged hardship and days of contact in the month prior to the 5-year interview, but the standardized coefficient is 1.5 times larger for the reverse causation path. Furthermore, in the panels for informal and formal cash support, lagged hardship is estimated to be negatively and statistically significantly associated with informal and formal child support payments made in the year prior to the 5-year interview.
Neither lagged formal child support nor lagged informal support is found to be associated with hardship in the year prior to the 5-year interview. The standardized coefficients from these models, which estimate the reverse causal direction, are more than twice the size of those in the other (hypothesized) direction (-0.05 vs. -0.02). In contrast, results from the in-kind support panel suggest that such contributions reduce future hardship, although hardship is not found to predict future in-kind contributions. Taken together, these results suggest that causation is more likely to go from hardship to father involvement than from father involvement to hardship.
---
Summary and Conclusion
This article examines associations between different measures of fathers' financial and physical involvement with their nonresident children and material hardship in the mother's household. It takes advantage of longitudinal data that include multiple observations on each family over several waves of data. Estimates from cross-sectional pooled models suggest that fathers' formal cash support, informal cash support, and contact with their children each reduce the number of hardships reported by sampled mothers. These results persist in models that control for other types of involvement and for an extensive set of covariates. The estimated effects of father-child contact are more consistently robust in models with lagged dependent variables and individual fixed effects than are results involving cash supports. This finding suggests that fathers who are involved with their children may differ in unobserved ways from fathers who are not involved, and such differences may drive the results for formal and informal cash child support. The robustness of results for the association of father-child contact and hardship within both the models with lagged dependent variables and those with individual fixed effects suggests that this association is not driven by unobserved heterogeneity. These results are consistent with those of Garasky and Stewart (2007), who find that the effect of father visits is stronger than that of child support payments in reducing food insufficiency.
An examination of the hypothesis that the associations might be due to reverse causation identifies stronger evidence that hardship decreases future father involvement than that father involvement decreases future hardship. As the preceding discussion notes, there are good reasons to believe that hardship diminishes levels of father involvement. If mothers have their phone service turned off, are evicted, or have to move to a shelter, fathers may find it difficult to visit and to contribute to their children (particularly through informal cash or in-kind contributions). The fixed-effects models looking at individual measures of hardship potentially point to this explanation. In those models, the strongest negative coefficients for days of father-child contact, informal cash support, and formal cash support are found for their association with eviction and staying in a place not meant for living. A related and more general explanation is that fathers' visits may decline when the mother experiences hardship. Arranging visits requires time and coordination on the mother's part. Her ability to make such arrangements may be impeded by experience of hardship.
Finally, this study finds that in-kind support is positively and statistically significantly associated with contemporaneous hardship in fixed-effects models and negatively associated with future hardship. Together, the in-kind results suggest a process that begins with reverse causation: the mother experiences hardship, and the father comes to her aid. This process ends with the originally hypothesized causal path: father involvement reduces future hardship for the mother and child.
In short, the results examining the relations between nonresident father involvement and material hardship in Fragile Families are far more complex than previously imagined. For a given family, causation very likely goes both ways, albeit at different times. Future research should focus on heterogeneity within the population. The recent work of Jacob Cheadle, Paul Amato, and Valarie King (2010) uncovers a number of different patterns of involvement among nonresident fathers. It is an important example of the type of research that is necessary. This article has a number of limitations that point the way for further research in this area. First, as mentioned previously, getting the temporal ordering of events is crucial when trying to understand how fathers' money and time spent with children affect economic well-being in the children's homes. Figuring out the appropriate lags and identifying data that measure these time periods is crucial. Related to the issue of temporal ordering is the possibility that time-varying unobserved characteristics may drive the results. If unemployment increases, fathers' involvement and mothers' hardship may be affected. Although both the fixed effects and cross-lagged models include the unemployment rate at the time of the 5-year survey, it is not clear if that is the appropriate time period or whether unemployment should be lagged. If unemployment should be lagged, it also is not clear how long the lag should be. Future research should also consider these questions.
Second, this study cannot rule out the possibility of measurement error in the indicator of in-kind support. Mothers are asked about fathers' provision of food, clothes, toys, medicine, and other items. It is hard to know how the mother would classify a situation in which the father takes the child to the doctor or pays the mother's electric bill. It is very likely that when a mother anticipates financial hardship, she calls the father and he provides assistance. However, some of these types of contributions are not picked up in the measure of in-kind support. Future surveys should focus attention on improving assessment of fathers' noncash contributions to children.
Third, it is possible that the amount of support provided by fathers is measured with greater error than the amount of fathers' contact with children. If this is the case, the estimates of the effects of support provided are less precise than the estimates of the effects of contact.
This study also has a number of strengths. First, the study finds that other characteristics of mothers, such as mental health (proxied with grandmother's mental health), impulsivity, and access to social support, are very strongly related to hardship. Those findings confirm results from previous research and reaffirm the need to include such types of variables in studies of family economic circumstances. Second, although the findings may provide more questions than answers, they clearly indicate that it is essential to look at these relations through a longitudinal lens. Results from prior research may be biased because they fail to consider the effects of fathers' involvement over time.
Material hardship-such as food insufficiency, homelessness, utility shutoffs, and unmet medical needs-is known to be detrimental to children's health and well-being, over and above the effects of household income or poverty. In addition, these conditions are present in many households with incomes well above poverty thresholds. Children living in single-parent families are particularly at risk for hardship, especially those children born to unmarried parents. It is important to understand how nonresident fathers, through their payment of child support and time spent with children, can improve their children's lives. It is also important to understand how material hardship in the mother and child's household may disrupt father involvement. The results from this research point to the strong possibility that all types of father involvement are important for children, but the findings also underscore the difficulty of making causal statements in this type of research. Finally, these results highlight the gaps in knowledge and the need for further research in this area.
4a1d3c1028add798344d49c0252e403926b65201 | WORKFORCE DIVERSITY AND EMPLOYEE PERFORMANCE IN THE CONSTITUTIONAL COMMISSIONS OF KENYA | 2,020 | [
"JournalArticle"
] | The study's objective was to establish the influence of workforce diversity on employee performance in the constitutional commissions of Kenya. Specifically, the study sought to determine the influence of gender diversity and age diversity on employee performance in the constitutional commissions of Kenya. The study was guided by social identification and categorization theory and similarity/attraction theory. It adopted a descriptive cross-sectional survey design. The target population was the 15 constitutional commissions of Kenya, and the study population comprised staff members at the headquarters of these organizations, a total of 623 employees at managerial level. A sample of 244 members was selected using stratified random sampling. A questionnaire was used as the data collection tool, and the researcher administered it to the entire selected sample. A pilot study was conducted for validation and pretesting. The data gathered were analysed using SPSS version 23, employing both descriptive and inferential statistics; descriptive statistics were used to analyse the quantitative data, and the findings are presented in tables, figures, graphs and prose. The study found that gender diversity positively and significantly affects the performance of staff members in Kenyan constitutional commissions, and that age diversity positively and significantly affects their performance. Therefore, when employing staff, it is important to ensure that the workforce is diverse, as this encourages improved performance. Equal promotion of employees is important because it motivates them to be dedicated to their work. It is also important for the organization to provide a favorable environment and working conditions for employees depending on their age. The organization should increase diversity and use work groups to maximally utilize their participation and synergy in order to boost employee and organizational performance. The organization should also ensure that there is educational diversity among its employees, both at management level and among junior staff. | INTRODUCTION
Worldwide, workforce diversity has become an issue of interest both in the workplace and in the market. Any company that wants to be more dynamic and profitable should hold views that have no borders and should assure employees of diversity in the daily running of the business and in all its everyday activities (Childs & Losey, 2015). Globally, companies are adjusting themselves so that employees from different backgrounds are able to acquire the right skills and are supported to implement corporate strategies (Ramirez, 2016).
Evidence of inclusion as a diversity strategy in the U.S. comes from the Human Resource Institute: a 2001 survey of a thousand privately and publicly owned organizations established that 56% offered diversity training on race, 68% on gender, 45% on ethnicity, 35% on age, 54% on disability, 57% on sexual orientation, and 24% on religion (Kelly, Ramirez & Brady, 2016). The performance index of these organizations rose by seven percent, with the private sector accounting for the larger share of five percent. The public sector had a lower performance index because it is reluctant to integrate diversity into its management systems.
According to Christian, Porter and Moffitt (2016), the minority workforce in the United States is expected to rise from 16.5% in 2000 to an estimated 25% in 2050. When the Review of Public Personnel Administration (ROPPA) was first published in 1980, White males accounted for 86% of all Senior Executive Service (SES) employees in the U.S. federal government. By 2008, that number had decreased to 65%. In addition to more racial/ethnic diversity, globalization has led to increases in cultural and linguistic diversity as well. About 18% of all households in the United States use a language other than English, and about 13% of U.S. residents were born in a different country (Rubaii-Barrett & Wise, 2018).
Because of the apartheid system, equity policies were added to the constitution in 1998, which has enabled South Africa to become the leading country in Africa in embracing diversity. Although the country has advanced considerably in democracy, employees still face discrimination and unequal treatment. The main indicators of persistent inequality in the system are the underrepresentation of black people in top positions in public institutions, the underrepresentation of women, and the near-total absence of people with disabilities (Nel, Gerber, Van Dyk, Haasbroek, Schultz, Sono & Werner, 2017).
According to the Cross-Cultural Foundation of Uganda (2017), ethnic, political and religious diversity poses a challenge to diversity management in public organizations in Uganda. Diversity is manifested and perceived as a challenge to workforce management, with pluralism enhanced by environmental changes, individual and community initiatives, and intermarriages. The dilemma is how diversity can be integrated into the management fabric of public organizations. There is also a need to lobby for the implementation of the Equal Opportunities Act and for educational institutions, political parties and cultural institutions to champion diversity management.
With regard to workforce diversity, about half of the population in Nigeria is of working age, yet the rate of employment is around twelve percent. The interaction of foreign and local cultures arising from multinational operations, together with the impacts of globalization, has made workforce diversity both a challenge and a resource. In Nigeria, 61% of FirstBank's employees are male and 39% are female; at the managerial level 66% are male and 34% female, and at the board level 84% are male and 16% female. Currently, FirstBank has only nine women on its subsidiaries' boards (Waller, 2016).
With the introduction of the new constitution, Kenya has introduced new demographic processes. The Kenyan Constitution of 2010 covers the provision of equal opportunities in various areas, including economic, cultural and social aspects (Namachanja & Okibo, 2015). There are conventions in Kenya calling for the inclusion of people from every societal context, including in public sector appointments. In the old dispensation there were no policies that allowed some of these conventions and treaties to take effect. The result was disproportion in public institutions in terms of disability, gender and ethnicity. The lack of equality could be the result of various factors such as practices, laws and policies that favored discrimination (Waiganjo et al., 2016).
These inequalities were addressed by the 2010 Constitution under Articles 10 and 232 on national values and principles of governance. The articles emphasize a strong national identity; effective leadership and representation; equal opportunities and resources for all; sustainable development; good governance; and the protection of vulnerable and marginalized individuals. It is therefore the responsibility of the management of public institutions to ensure that their staff represent all citizens professionally, academically, and in terms of gender, age, disability, minority status, race and ethnicity.
In Article 232 the constitution provides that the different communities in Kenya should be represented in the public service. Further, under Article 10 public organizations are required to ensure inclusiveness, non-discrimination and the protection of marginalized and vulnerable groups. The constitution is specific in Articles 54-57 about the individuals qualified for special rights; they include older members of society, children, persons with disabilities, the youth, and marginalized and minority groups.
To ensure representation in the public service, the constitution provides for the use of special measures and affirmative action so as to promote equal employment opportunities. This can be found in Article 27(4)(d), which emphasizes non-discrimination, while Article 27(6) provides that the government should take affirmative action to address the challenges faced by people who may have faced discrimination at some point in their lives. Appointments of people with disabilities are addressed in Article 54(2), which requires that at least 5% of appointments go to these persons. Youth employment is addressed in Article 55, and affirmative action on the employment of marginalized groups and minorities is emphasized in Article 56(c).
The National Gender and Equality Commission was established by an Act of 2011; its roles include, inter alia, promoting equality and non-discrimination and mainstreaming gender issues, people with disabilities and marginalized individuals in national development. The Ethics Act provides for a business environment that supports diversity, and public officers are required to discharge their duties professionally and to respect their colleagues in the public service. The 2015 Act focuses on values and principles: public organizations are required to ensure that men and women, persons with disabilities and the various ethnic groups all form part of their workforce.
According to KNBS (2015), the public sector has approximately 700,000 employees drawn from various races and ethnic groups, marginalized persons, people with disabilities and minorities. The PSC survey (2013/14) revealed that the constitutional requirement of the two-thirds gender rule has not been implemented fully. On ethnic composition, PSC surveys have revealed that some communities are highly represented while others, especially those from marginalized regions, are underrepresented. Moreover, the representation of people with disabilities is also low (1%). This study sought to establish the influence of workforce diversity on employee performance in the constitutional commissions of Kenya.
---
Statement of the Problem
Based on the report of the Quality Assessment and Performance Improvement Strategy (2016), the Kenyan constitutional commissions experienced low levels of staff performance, which resulted in an 8% decline in employee satisfaction for the period 2015-2016. The unsatisfactory performance was attributed to employees' inability to meet deadlines and to poorly executed tasks arising from the hiring of unqualified employees. To improve performance and productivity, the report recommended that the commissions overhaul their HR practices, mainly with regard to training employees in new technology, empowering the youth, and eliminating discrimination, bias and favoritism in the work environment.
According to the NCIC (2016) audit report, the commissions displayed racial and ethnic inequality. The report established that, out of the 42 tribes in the country, only 10% of them account for around 88% of the workforce, while twenty tribes combined do not constitute even 1% of it. This implies that public resources such as salaries benefit only a few communities, which greatly affects the growth and unity of the country and is a key cause of unfair service delivery (NCIC, 2016).
Various studies (Dessler, 2016; Bekele, 2015; Nyambegera, 2017; Barlow et al., 2016) have focused on different aspects of workforce diversity and acknowledge both the issue of staff performance and the alarming rate of organizational nonperformance attributed to a diverse workforce. However, these studies were conducted in different contexts and nations. This study therefore sought to fill the research gap by establishing the influence of workforce diversity on employee performance in the constitutional commissions of Kenya.
---
Objectives of the study
The general objective of this study was to establish the influence of workforce diversity on employee performance in the constitutional commissions of Kenya. The study was guided by the following specific objectives: (i) to determine the influence of gender diversity on employee performance in the constitutional commissions of Kenya; and (ii) to analyze the influence of age diversity on employee performance in the constitutional commissions of Kenya.
---
Research Hypotheses
The study was guided by the following hypotheses: HA1: Gender diversity has a positive and significant influence on employee performance in the constitutional commissions of Kenya. HA2: Age diversity has a positive and significant influence on employee performance in the constitutional commissions of Kenya.
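Although the study reports its inferential analysis as carried out in SPSS, the same hypothesis tests can be expressed as a simple multiple regression of an employee performance score on composite gender diversity and age diversity scores; the sketch below is illustrative only, and the file and variable names are hypothetical placeholders.

# Minimal sketch: testing HA1 and HA2 with a multiple regression of a composite
# employee-performance score on composite gender-diversity and age-diversity scores.
# File name and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("commission_survey.csv")   # one row per respondent (n = 244)
model = smf.ols("performance ~ gender_diversity + age_diversity", data=survey).fit()
print(model.summary())
# HA1 is supported if the gender_diversity coefficient is positive and p < 0.05;
# HA2 is supported if the age_diversity coefficient is positive and p < 0.05.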
---
LITERATURE REVIEW
---
Theoretical Framework
The discipline of workforce diversity, in its effort to streamline the interactions of a diverse workforce and harness its potential in organizations, has borrowed a number of theories. This study was guided by social identification and categorization theory and similarity/attraction theory.
---
Social Identification and Categorization Theory
Social category diversity refers to variation in the membership of social categories, which can arise from differences in members' gender, age or ethnicity (Jackson, 1992). Such differences within groups can lead to reduced group cohesiveness or lower satisfaction among members, and if they are not managed, relationship conflict arises, with a negative impact on performance (Williams & O'Reilly, 1998; Tjosvold et al., 2004). Based on this theory, people develop a personal identity from the categories to which they belong (Hogg, Terry & White, 1995), and they tend to group themselves with other members who share the same behaviours, attitudes and attributes. Self-categorization is the term used to describe the process by which an individual comes to see themselves as part of a group (Kulik & Bainbridge, 2006). The theory implies that when a perceiver encounters a new target, a comparison is made between the individual and that target; people seek out other groups when they discover that the targeted group differs from what they perceived, and comparing oneself with other groups is common (Ashforth & Humphrey, 1995). The main attributes used in making such comparisons are age, race and gender, because they are the most visible characteristics that perceivers use to identify themselves and, in turn, to categorize other people. The effect of self-categorization and social identity is that it can lead to prejudice, stereotyping and conflict (Kulik & Bainbridge, 2006). The theory has been applied to predict and understand how diversity affects people's attitudes and group behaviour. In explaining the effects of diversity on individual outcomes, the main argument is that visibility and character affect the feeling of identification (Tsui, Egan & O'Reilly, 1992). Within groups, identification depends mainly on members' demographics and is related to in-group bias and intra-group conflict. By extending theories that explain individuals' attitudes and traits, diversity research has established that decisions about diversity are highly likely to influence social activity in the group and in the institution as a whole (Jehn, Northcraft & Neal, 1999; Pelled, Eisenhardt & Xin, 1999). Although social categorization and social identity theories were created to explain the effects of observable diversity, some scholars have applied them to explain the effects of personal and value-based diversity (Thomas, 1999).
Employing individuals of different genders is important for an organization because their interaction can create new knowledge and thereby improve performance. The theory supports the gender diversity variable by linking social identification and categorization to employee performance in constitutional commissions of Kenya.
---
Similarity/Attraction Theory
The foundation of this theory is the notion that demographic homogeneity increases the chances that people will be attracted to and like each other. People from the same background may find that they have a lot in common compared with those from different backgrounds, which makes it easier for them to work together and develop products or solutions to problems. Similarity affirms one's values and ideas, whereas disagreement calls them into question and is unsettling. Studies have established that, in circumstances where people get the chance to interact with various individuals, they are highly likely to select someone with whom they share the same characteristics (Berman et al., 2008; Cassel, 2001).
Research based on the similarity/attraction concept has established that a lack of similarity leads to less attraction among individuals, manifesting in reduced communication, distorted information and communication errors (Cameron & Quinn, 2002). Research based on this theory has also established that organizations with high levels of diversity are more likely to experience faulty work processes, and faulty work results in poor employee performance. Individuals of different age groups hold diverse knowledge; therefore, incorporating employees of diverse ages promotes employee growth and improves their understanding of their tasks. The theory supports the age diversity variable by linking similarity/attraction to how employees perform in constitutional commissions of Kenya.
---
Figure 1: Conceptual framework (independent variables: gender diversity and age diversity; dependent variable: employee performance)
In companies, gender-based inequality is reinforced and justified by stereotyping and bias that describe male characteristics positively, leading to a higher preference being given to men (Leonard & Levine, 2016; Nkomo, 2016). This means that companies prefer male employees over female employees because of the perception that they perform better and are more able to manage their duties. Carrel (2016) stated that a significant amount of employee diversity is not effective if gender factors are not recognized and managed, and indicated that the greatest challenge to overcome is the perception that women and men are not equal. Kossek, Lobel and Brown (2015) indicated that, worldwide, 80% of working-age men are employed compared with only 54% of women, and that the position women have been given in society relates to caregiving and domestic duties.
Kochan, Bezrukova, Ely, Jackson, Joshi, Jehn, Leonard, Levine and Thomas (2016) stated that it is very important for women to be provided with equal opportunities in a company because they are essential to improving its performance. Societal mandates eliminated policies that discriminated against some groups of workers, and companies that failed to implement fair employment opportunities faced increased costs. Because of discriminatory practices, such organizations are forced to hire employees who are paid much more than the alternatives yet are not very productive (Barrington & Troke, 2017). Moreover, Wentling and Palma Rivas (2015) indicated that companies with diversified employees provide better services because they understand their clients better (Kundu, 2016). Armstrong (2015) indicated that performance is determined by behaviour as well as outcome. The performer is the one who displays the behaviour and turns it into action; behaviours are outcomes in their own right, the product of mental and physical effort directed towards a particular task. The performance of a worker is the combination of actual outcome measures assessed against the intended goal. Kenney (2016) stated that the way a staff member performs is determined by the standards that the company sets.
Employees of any company have expectations of the company in return for their performance. Employees are said to be good performers if they meet what the company expects of them and attain its goals and set standards. This implies that effective management and proper administration of staff members' tasks reflect the quality the company requires and can be regarded as performance. Dessler (2017) stated that the performance of a staff member is measurable behaviour that is relevant to the achievement of the company's goals. Staff performance depends on more than personal factors; it also includes external factors such as the office environment and motivation. Performance is measured mainly on four factors: quality, dependability, quantity and work knowledge (Mazin, 2015).
According to Cole (2018), the performance of staff members is determined against the standards that the company sets. Performance refers to achieving specific tasks measured against predetermined standards of cost, speed, accuracy and completeness. Apiah et al. (2015) indicated that the performance of staff members is determined during the review of work performance. Contextual performance comprises activities that do not contribute to the main agenda of the company but support its social and psychological environment, through which the goals of the company are pursued (Lovell, 2017). Contextual performance is determined using other individual variables, including behaviours that establish the social and psychological context of the organization and assist staff members in carrying out their main technical activities (Buchman et al., 2016).
---
METHODOLOGY
The research design adopted was a descriptive cross-sectional survey. Cooper and Schindler (2008) indicated that this type of study is carried out at a single point in time. Such a design helps the researcher determine whether, at that particular time, the variables are significantly related (Mugenda & Mugenda, 2008).
The target population for this study was staff drawn from 15 Kenyan constitutional commissions at their headquarters in Nairobi. The study population comprised 623 managerial-level employees working at the constitutional commissions' head offices. Managerial-level employees were selected because they held the information needed for this study.
The study used the Krejcie and Morgan (1970) formula to determine the sample size and a stratified random sampling technique to select the sample. A questionnaire was used as the main tool for gathering data. The study adopted a mixed-methods data analysis approach in which both inferential and descriptive analyses were performed. Both quantitative and qualitative data were collected: quantitative data were analysed using descriptive statistical techniques, while content analysis was used to analyse the qualitative data.
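As an illustration of the sample size determination step, the sketch below applies the standard Krejcie and Morgan formula with its conventional parameters (chi-square of 3.841 at 95% confidence, P = 0.5, margin of error d = 0.05). These parameters are assumptions, since the study does not report them; with the population of 623 the formula yields a figure in the same neighbourhood as the sample of 244 reported later.

```python
# Minimal sketch of the Krejcie and Morgan (1970) sample size calculation.
# The parameter values are the conventional defaults, assumed rather than
# taken from the study.
def krejcie_morgan(N, chi_sq=3.841, P=0.5, d=0.05):
    """Recommended sample size for a finite population of size N."""
    return (chi_sq * N * P * (1 - P)) / (d ** 2 * (N - 1) + chi_sq * P * (1 - P))

print(round(krejcie_morgan(623)))  # approximately 238 for the 623 managerial-level employees
```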
Before the data were analysed, they were coded, cleaned and grouped according to their variables.
Pearson's r correlation was used to measure the strength and direction of the linear relationship between variables. Multiple regression models were then fitted to the data to determine how the predictor variables affect the response variable and to measure the influence of workforce diversity on employee performance in constitutional commissions of Kenya.
The overall model was: Y = β0 + β1X1 + β2X2 + ε, where Y = employee performance; X1 = gender diversity; X2 = age diversity; β1 and β2 = the beta coefficients of the independent variables, to be estimated; and ε = the error term.
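The sketch below shows how such a model could be fitted by ordinary least squares. The file name and column names (composite scores for each construct) are hypothetical and not taken from the study.

```python
# Minimal sketch of fitting the specified model, assuming composite Likert scores.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_scores.csv")  # hypothetical file of composite scores per respondent
X = sm.add_constant(df[["gender_diversity", "age_diversity"]])  # adds the intercept (beta 0)
y = df["employee_performance"]

model = sm.OLS(y, X).fit()
print(model.summary())  # reports R-squared, the ANOVA F-statistic and the beta coefficients
```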
---
RESULTS AND DISCUSSIONS
The study selected a sample of 244 managerial-level employees working at the constitutional commissions' head offices. All selected respondents were issued with questionnaires for data collection, but the researcher received back only 217 questionnaires, giving a response rate of 88.9%. Since the response rate was above 70%, it was considered excellent and the data were used for further analysis and reporting.
---
Descriptive Results
In this section, the study presents findings on the Likert scale questions in which respondents were asked to indicate their level of agreement or disagreement with various statements relating to the influence of workforce diversity on employee performance.
---
Gender Diversity
This study investigated whether there is a relationship between gender diversity and employee performance. The findings presented in Table 1 showed that the majority of respondents agreed with various statements relating to gender diversity. Regarding employment, 80.2% of respondents were in agreement that the organization employs both genders (M=3.982); 80.6% agreed that when it comes to employee treatment, all employees are treated fairly irrespective of their gender (M=3.889); and 75.6% agreed that both male and female employees are given the opportunity to show their potential (M=3.777). On training, 75.1% of respondents agreed that both genders take part in decision-making (M=3.948); 77.4% agreed that the company encourages career development involving all employees (M=3.738); and 77.4% agreed that training and development programmes are created in a way that fulfils the needs of both genders (M=3.698).
With regard to promotion, 78.8% of respondents were in agreement that the organization provides female employees with opportunities to grow (M=3.915); 72.8% agreed that both genders have an equal chance of being promoted (M=3.863); and 72.8% agreed that promotion is a fair process in the organization (M=3.836). On gender evaluation, the study found that 80.6% of respondents were in agreement that the organization has an employee evaluation system used to evaluate both genders (M=3.714); 75.6% agreed that the performance evaluation of both genders is reviewed against set performance standards (M=3.751); and 80.6% agreed that the organization provides feedback after an evaluation process (M=3.856). On fair treatment, the study further established that 75.1% of respondents agreed that the organization's rules and regulations apply to employees of both genders (M=3.915); 77.4% agreed that each employee is recognized and rewarded for their accomplishments (M=3.699); and 77.4% agreed that the organization treats employees as equals (M=3.678).
Respondents also indicated other ways in which gender diversity affects employee performance in constitutional commissions of Kenya. They explained that when there is gender equality in the organization and equal promotion opportunities irrespective of gender, employees are motivated to put more effort into their work. Diversification in organizations also allows the provision of better services because diverse employees understand their clients better. The advantage of gender diversity is, however, contingent on areas such as the company's strategy, culture, environment and people.
The study findings concurred with Naqvi, Ishtiaq, Kanwal, Butt and Nawaz (2016) that expanding gender diversity in a group prompts inventiveness and development; they added that decision making improves and the final product is better, boosting the performance of the group. The findings also agree with Hoogendoorn, Oosterbeek and Praag (2013), who established that groups whose members were equally mixed in terms of gender performed better in sales and profitability than male-dominated groups.
---
Table 1 (extract): Gender diversity descriptive statistics. Values are the percentage of respondents who selected Strongly Disagree, Disagree, Neutral, Agree and Strongly Agree, followed by the mean (M) and standard deviation (SD).

Gender Evaluation
The organization has an employee evaluation system used to evaluate both genders: 5.5, 5.5, 7.4, 74.2, 7.4; M=3.714, SD=1.251
Performance evaluation of both genders is reviewed against set performance standards: 4.6, 7.4, 4.6, 75.6, 7.8; M=3.751, SD=1.277
The organization provides feedback after an evaluation process: 2.8, 6.0, 2.8, 80.6, 7.8; M=3.856, SD=1.384

Fair Treatment
The organization treats employees as equals: 6.0, 8.8, 2.8, 77.4, 5.5; M=3.678, SD=1.325
Each employee is recognized and rewarded for their accomplishments: 1.8, 5.1, 14.7, 77.4, 0.9; M=3.699, SD=1.331
In the organization the rules and regulations apply to employees of both genders: 2.8, 6.0, 2.8, 75.1, 13.8; M=3.915, SD=1.267
---
Age Diversity
This study also investigated whether there is a relationship between age diversity and employee performance in constitutional commissions of Kenya.
From the results presented in Table 2, respondents agreed with various statements relating with age diversity. Regarding generation X, 85.3% of respondents agreed that baby boomers work to achieve organizational goals (M=3.994); 85.3% agreed that the organization employs individuals from generation X (M=3.961); and 87.6% that generation X work independently with minimal supervision (M=3.856). On generation Y, 78.8% respondents agreed that generation Y highly focus on developing their career (M=3.994); 88.9% that the organization employs individuals from generation Y (M=3.955); and 82.9% that generation Y prefer working as a team to achieve organization goals (M=3.836).
Regarding Generation Z, 85.3% respondents agreed that generation Z collaborate with other organization members to achieve organizational goals (M=3.988); 94.5% agreed that generation Z is motivated by social rewards, mentorship, and constant feedback (M=3.961); and 83.4% that the organization employs individuals from generation Z (M=3.830). On equitable workplace, 78.8% respondents agreed that training in the organization is inclusive of diverse ages (M=3.935); 78.3% that the organization gives employee of different age groups equal opportunities (M=3.803) and 72.8% that equal opportunity brings together employees with diverse exposures (M=3.744). On inclusion of ages, 85.3% respondents agreed that in the organization promotion is inclusive of diverse age (M=3.994); 85.3% that a gender diverse team produces high quality decisions over a homogeneous team (M=3.961); and 87.6% that a gender diverse team enhance the organization's overall creativity and innovation (M=3.889).
Respondents gave other ways in which age diversity affects employee performance in constitutional commissions of Kenya. Some indicated that older employees have more experience and expertise and therefore assist the younger generation, which in turn enhances performance. Others were of the opinion that age differences make it challenging to work with colleagues because of differing interests and preferred ways of performing tasks.
The findings of the study disagree with Kunze, Boehm and Bruch (2017), who found that age diversity appears to be associated with the emergence of an age-discrimination climate in organizations, which adversely affects company performance through the mediating role of employees' affective commitment. The study, however, agrees with Joseph (2018) that workers' age groups and their performance were negatively correlated.
---
Employee Performance
This study also investigated employee performance in constitutional commissions of Kenya.
The findings presented in Table 3 showed that 74.2% of respondents agreed that the performance of employees had improved over the past five years (M=4.021); 69.6% agreed that age diversity in organizations has improved employee performance (M=3.988); 73.7% agreed that high-performing workers get promoted more easily in a company than lower performers (M=3.902); 73.7% agreed that education diversity in the organization has helped to improve performance (M=3.902); 77.4% agreed that social diversity has improved levels of employee performance in their organization (M=3.836); 69.1% agreed that the company rewards employees for their good performance (M=3.810); and 70.5% agreed that gender diversity in their organization has resulted in improved performance among employees (M=3.738).
The study findings concurred with Sabwami (2018) that low performance and failure to accomplish set objectives may be experienced as disappointing, or even as a personal failure, and that high-performing workers get promoted more easily in a company than lower performers.

Table 3 (extract): Employee performance descriptive statistics. Values are the percentage of respondents who selected Strongly Disagree, Disagree, Neutral, Agree and Strongly Agree, followed by the mean (M) and standard deviation (SD).
The company rewards employees for their good performance: 4.6, 5.1, 7.8, 69.1, 13.4; M=3.810, SD=1.142
Gender diversity in our organization has resulted in improved performance among employees: 6.5, 3.2, 9.7, 70.5, 10.1; M=3.738, SD=1.168
---
Inferential Results
The relationships between the study variables were determined by computing inferential statistics. The study computed correlation and regression analyses.
---
Correlation Results
Pearson's r correlation was used to measure the strength and direction of the linear relationship between variables. The association was considered small if |r| was between 0.1 and 0.29, medium if between 0.3 and 0.49, and strong if |r| was greater than 0.5. The findings presented in Table 4 showed that gender diversity had a strong, positive and significant relationship with the performance of employees in constitutional commissions in Kenya (r=0.793, p=0.000); age diversity was also found to have a strong, positive and significant relationship with the performance of employees in constitutional commissions in Kenya (r=0.743, p=0.000). Based on these findings, it can be seen that both variables (gender diversity and age diversity) had a significant relationship with the performance of employees in constitutional commissions in Kenya.
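As a minimal sketch of this correlation step, the snippet below computes Pearson's r and its p value for one predictor; the file and column names are the same hypothetical composite scores assumed in the regression sketch above.

```python
# Sketch of the Pearson correlation step on hypothetical composite scores.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("survey_scores.csv")  # hypothetical file of composite scores
r, p = pearsonr(df["gender_diversity"], df["employee_performance"])
print(f"r = {r:.3f}, p = {p:.3f}")  # the study reports r=0.793, p=0.000 for gender diversity
```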
---
Multiple Regression Analysis
Multiple regression models were fitted to the data in order to determine how the predictor variables affect the response variable. This study used a multiple regression model to measure the influence of workforce diversity on employee performance in constitutional commissions of Kenya and to test research hypotheses 1 and 2.
---
Model Summary
A model summary is used to show the amount of variation in the dependent variable that can be explained by changes in the independent variables.
---
Analysis of Variance
Analysis of variance was used to test the significance of the model. The significance of both the unmoderated and the moderated regression models was tested at the 5% level of significance. For the unmoderated regression model (model 1), the significance of the model was 0.000, which is less than the selected level of significance of 0.05; this suggests that the model was significant. The findings further show that the F-calculated value (21.515) was greater than the F-critical value (F(5,211) = 2.257), suggesting that the variables gender diversity and age diversity can be used to predict employee performance in constitutional commissions of Kenya.
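The critical value used in this comparison can be reproduced with a short sketch; the significance level and degrees of freedom below are simply those reported above.

```python
# Sketch of obtaining the critical F value at the 5% significance level
# for the degrees of freedom reported in the analysis of variance.
from scipy.stats import f

f_critical = f.ppf(0.95, dfn=5, dfd=211)
print(round(f_critical, 3))  # approximately 2.26, close to the reported 2.257
```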
---
Beta Coefficients of the Study Variables
The beta values that were obtained were used to fit the regression equations (the moderated and the unmoderated). For the fitted regression equations, Y = employee performance; X1 = gender diversity; X2 = age diversity. The findings were also used to test the hypotheses of the study.
From the findings of the first model (model 1), the following regression equation was fitted: Y = 0.920 + 0.388X1 + 0.784X2. From this equation, it can be observed that when the predictor variables (gender diversity and age diversity) are held constant at zero, employee performance in constitutional commissions of Kenya will be at a constant value of 0.920.
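To illustrate how the fitted equation is read, the snippet below plugs hypothetical composite scores into the reported coefficients.

```python
# Illustrative use of the fitted equation; the input scores are hypothetical.
def predict_performance(gender_diversity, age_diversity):
    return 0.920 + 0.388 * gender_diversity + 0.784 * age_diversity

print(round(predict_performance(3.0, 3.0), 3))  # 4.436 on the composite performance scale
```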
The first hypothesis of the study was: H A1 Gender diversity positively and significantly affects performance of staff members in Kenyan constitutional commissions
The findings show that gender diversity has a significant influence on employee performance in constitutional commissions of Kenya (p=0.029<0.05) and that this influence is positive (β=0.388). These findings suggest that we accept the alternative hypothesis H A1 and conclude that gender diversity positively and significantly affects the performance of staff members in Kenyan constitutional commissions. The study findings agree with Hoogendoorn, Oosterbeek and Praag (2013) that groups whose members were equally mixed in terms of gender performed better in sales and profitability than male-dominated groups.
The second hypothesis was: H A2 Diverse age positively and significantly affects performance of staff members in Kenyan constitutional commissions
The findings show that age diversity has a significant influence on employee performance in constitutional commissions of Kenya (p=0.007<0.05) and that age diversity positively affects employee performance (β=0.784). These findings suggest that we accept the alternative hypothesis H A2 and conclude that diverse age positively and significantly affects the performance of staff members in Kenyan constitutional commissions. The study findings agree with Backes-Gellner and Veen (2017), who examined whether age diversity in an organization's workforce influences organizational efficiency and found that expanding age diversity positively affects efficiency if and only if the organization engages in innovative rather than routine undertakings.
---
CONCLUSIONS AND RECOMMENDATIONS
The study concluded that gender diversity positively and significantly affects the performance of staff members in Kenyan constitutional commissions. The study revealed that gender diversity has a significant, positive influence on employee performance in the commissions, and gender diversity had a strong positive correlation with performance.
The study concluded that diverse age positively and significantly affects the performance of staff members in Kenyan constitutional commissions. This conclusion was drawn from the findings that age diversity has a significant, positive influence on employee performance in the commissions and that age diversity had a strong positive correlation with performance.
There is a need to ensure gender diversity in the organization. When employing staff, it is important to ensure that they are diverse, as this will encourage improved performance. Equal promotion of employees is important because it motivates employees to be dedicated to their work. It is also important for the organization to ensure that there is age diversity among employees and to provide a favorable environment and working conditions for employees depending on their age. With age comes experience, while younger individuals are more innovative and adapt quickly to new technology. Depending on its objectives, the organization should select employees of an appropriate age to suit the positions it has created. Constitutional commissions in Kenya should also ensure that there is ethnic diversity in the organization, as this will increase employee performance. The organization should increase diversity and use work groups to maximize participation and synergy in order to boost employee and organizational performance.
Policy makers in constitutional commissions should set a strong example for diversity in the workplace by having policies that make management accountable for promoting inclusion. Managers should be hired based on their accomplishments, showing staff that gender, age and ethnic background have nothing to do with succeeding at the organization.
The study also recommended that policy makers establish a diversity policy that requires the board of directors to set measurable objectives for achieving greater gender diversity and to assess annually both those objectives and the progress made in achieving them.
37505a60cd04b665b61f914e8126e600060f27f3 | Examining vulnerability and resilience in maternal, newborn and child health through a gender lens in low-income and middle-income countries: a scoping review | 2,022 | [
"JournalArticle",
"Review"
] | Introduction Gender lens application is pertinent in addressing inequities that underlie morbidity and mortality in vulnerable populations, including mothers and children. While gender inequities may result in greater vulnerabilities for mothers and children, synthesising evidence on the constraints and opportunities is a step in accelerating reduction in poor outcomes and building resilience in individuals and across communities and health systems. Methods We conducted a scoping review that examined vulnerability and resilience in maternal, newborn and child health (MNCH) through a gender lens to characterise gender roles, relationships and differences in maternal and child health. We conducted a comprehensive search of peer-reviewed and grey literature in popular scholarly databases, including PubMed, ScienceDirect, EBSCOhost and Google Scholar. We identified and analysed 17 published studies that met the inclusion criteria for key gendered themes in maternal and child health vulnerability and resilience in low-income and middleincome countries. Results Six key gendered dimensions of vulnerability and resilience emerged from our analysis: (1) restricted maternal access to financial and economic resources; (2) limited economic contribution of women as a result of motherhood; (3) social norms, ideologies, beliefs and perceptions inhibiting women's access to maternal healthcare services; (4) restricted maternal agency and contribution to reproductive decisions; (5) power dynamics and experience of intimate partner violence contributing to adverse health for women, children and their families; (6) partner emotional or affective support being crucial for maternal health and well-being prenatal and postnatal. Conclusion This review highlights six domains that merit attention in addressing maternal and child health vulnerabilities. Recognising and understanding the gendered dynamics of vulnerability and resilience can help develop meaningful strategies that will guide the design and implementation of MNCH programmes in low-income and middle-income countries. ⇒ Socioeconomic inequalities place women and girls in precarious positions that adversely affect their vulnerability and resilience to health shocks. ⇒ Research on the gendered dimension of maternal and child health vulnerability and resilience is needed to fully evaluate how gender expectations may result in greater vulnerability for mothers, newborns and children or impact their resilience.⇒ This study provides new evidence on the gender dynamics of vulnerability and resilience in maternal, newborn and child health (MNCH) and how this impacts health outcomes. | INTRODUCTION
Maternal and childhood mortality remain key health challenges in several low-income and middle-income countries (LMICs). In 2019, approximately 5.2 million children died before their fifth birthday; more than 80% of these deaths occurred in sub-Saharan Africa and Central and South Asia. 1 Sub-Saharan Africa and South Asia bore 86% of the estimated global burden of maternal mortality in 2017. 2 Sub-Saharan Africa's maternal
mortality ratio of 546 per 100 000 live births is estimated to be the highest globally for any region. 3 In Maternal, Newborn and Child Health (MNCH), a vulnerable pregnant woman was defined as a woman who is threatened by physical, psychological, cognitive and/or social risk factors in combination with a lack of adequate support and/or adequate coping skills. 4 On the other hand, resilience has been described as the capability of public health and healthcare systems, communities and individuals to prevent, protect against, quickly respond to and recover from health emergencies, particularly those whose scale, timing or unpredictability threatens to overwhelm routine capabilities. 5 Thus, in MNCH, vulnerability and resilience are two divergent terms that tend to complement each other by acting as risk or protective factors, respectively, both at the individual level and at the level of the health system. Pregnancy-related morbidity and mortality in LMICs are often preventable or treatable, but poverty, low maternal educational attainment and place of residence, among several other underlying factors, increase women's vulnerability to adverse maternal and child health outcomes. [6][7][8][9][10][11] Although multiple studies have examined these vulnerabilities, more attention needs to be paid to how they are patterned by gender to influence MNCH outcomes. Similarly, maternal resilience, evidenced in women's ability to sustain life satisfaction, self-esteem and purpose amidst the emotional, physical and financial difficulties associated with mothering and caregiving, has been studied extensively. 6 8 10-12 However, there has been limited focus on how gender roles and norms may shape these factors. 12 Institutionalised power and the social, political and economic advantages and disadvantages afforded to different genders influence power relations. Gender also intersects with other social determinants of health, including social class, race and ethnicity, 13 determines the hierarchy of social structure and power dynamics, and influences health outcomes. Health inequalities conditioned by gender are likely to put vulnerable populations at a further disadvantage. 14 Today, there is an increasing need for a critical and systematic assessment of the effect of gender norms and gender inequality on the constraints faced by, and opportunities available to, vulnerable populations regarding MNCH. Theoretical and conceptual advances in global health have highlighted the importance of gender expectations, roles and relations in health promotion interventions. [15][16][17] For example, different gender expectations may result in greater vulnerability for mothers and children. Promising gender-sensitive practices in health have also emerged to address the HIV/AIDS epidemic and influence maternal and child health outcomes. [18][19][20][21] The Sustainable Development Goals (SDGs) aim to reduce maternal deaths to less than 70 per 100 000 live births by 2030 (SDG 3.1), neonatal mortality to at least as low as 12 per 1000 live births and under-5 mortality to at least as low as 25 per 1000 live births (SDG 3.2). These maternal and child health targets may be impossible to achieve if the critical factors shaping maternal and child health vulnerability and resilience are not well articulated. The SDG agenda must treat gender as a cross-cutting aspect, integrated within design, resource allocation, implementation, measurement and evaluation.
Specifically, understanding how health systems respond to the critical factors that shape the health and well-being of mothers, children and newborns is necessary. 22 This scoping review illuminates how gender differences and relations provide important insight into how power structures and roles aggravate vulnerability or strengthen resilience in maternal and child health in LMICs. It provides new evidence on gendered dynamics in MNCH research that must be considered as we strive to programme interventions aimed at achieving the SDG targets on maternal and child health.
---
METHODS
We conducted a scoping review in accordance with Arksey and O'Malley's framework to examine the gendered dimension of vulnerability and resilience in MNCH in LMICs. 23 24 A scoping review was necessary for a broad and comprehensive analysis without consideration of publication quality. The review followed five stages: (1) identifying the research question; (2) identifying the relevant studies; (3) selecting the studies; (4) charting data and (5) collating, summarising and reporting results.
---
Identification of relevant peer-reviewed literature
This gender analysis was based on a larger scoping review aimed at developing a framework for vulnerability and resilience in MNCH in LMICs. The initial pool of literature was retrieved from major databases (ie, Medline, Embase, Scopus and Web of Science) based on a comprehensive and exhaustive search strategy that included appropriate keywords (see online supplemental appendix S1). This was supplemented by a grey literature search. The initial search was conducted on 15 January 2021 and updated on 1 March 2021.
The search strategy was structured around three blocks: (1) population (ie, MNCH, health outcomes, healthcare utilisation and social capital), (2) exposure (ie, vulnerability, resilience and high-risk) and (3) setting (ie, lowincome and middle-income settings). Critical keywords and thesaurus heading terms were initially tailored to Medline and Embase searches and then adapted in other sources as necessary. Online supplemental appendix S1 shows the full search strategies for Medline and Embase.
We also reviewed reports and technical papers from multilateral and bilateral organisations, foundations, international and local non-governmental organisations, such as the Bill & Melinda Gates Foundation, Jhpiego, Clinton Health Access Initiative, International Centre for Research on Women, Women's Health and Action Research Centre, Gender Watch and pharmacies. To gather as much evidence as possible, including
high-quality literature regarding vulnerable populations in MNCH beyond the traditional sources, we incorporated the research from grey literature into this scoping review. We supplemented the database search with a bibliography search of key articles but found no relevant articles beyond what had already been extracted. We did not apply language restrictions in our search parameters and, thus, engaged translators to translate non-English publications.
---
Study selection
We developed and validated a high-performance machine learning classifier/algorithm (bidirectional encoder representations from transformers) to identify relevant studies focusing on vulnerability and resilience in MNCH from an initial pool of search results. Previous studies have reported the high predictive ability of machine learning models in title and abstract screening. [25][26][27] To train the machine learning algorithm, we randomly selected, screened and annotated the titles and abstracts of 500 records from the database. The performance of the model was evaluated against our classification based on precision, recall, specificity and accuracy scores. Subsequently, we applied the algorithm to review the abstracts and titles of the remaining publications to generate predictions to include or exclude them.
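The evaluation step described above can be sketched briefly. The label convention (1 = relevant/include, 0 = irrelevant/exclude) and the example arrays below are hypothetical; the study's actual BERT-based classifier and the 500 annotated records are not reproduced here, but any classifier's predictions on that set could be scored in this way.

```python
# Sketch of scoring a screening classifier against the human annotations.
# y_true are hypothetical reviewer labels, y_pred hypothetical model predictions
# (1 = relevant/include, 0 = irrelevant/exclude).
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
precision   = tp / (tp + fp)
recall      = tp / (tp + fn)          # sensitivity: relevant records correctly retained
specificity = tn / (tn + fp)          # irrelevant records correctly screened out
accuracy    = (tp + tn) / (tp + tn + fp + fn)
print(precision, recall, specificity, accuracy)
```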
Covidence, an online systematic review software, was used to manage the search outputs and screening of eligible studies (https://www.covidence.org/). Two researchers screened the identified manuscripts retained from machine-learning predictions using Covidence. A third researcher reviewed and resolved all conflicts. Titles and abstracts were first screened before a full-text review for possible inclusion in the study. We included studies based on four key criteria. First, if they focus on women (pregnant/lactating and teenage mothers) and/ or children (male and female) under 5 years. Second, if they focused on LMICs. We also included studies that focused on vulnerability, frailty or high risk and resilience in LMICs. Lastly, we included all study types including peer-reviewed publications, programmatic reports, and conference abstracts. There were no language restrictions nor exclusions based on the year of publication.
---
Charting data
To provide a holistic gender analysis, we adapted a conceptual framework for gender analysis in health systems research by Morgan et al. 28 The framework unifies several other frameworks focusing on health, health systems and development. 28 More importantly, the framework's unique focus on how power is constituted and negotiated makes it a valuable resource for understanding gender in terms of power relation and a source of disparity in health systems. The framework had five focal areas, namely, access to resources, division of labour, social norms, rules and decision-making, power negotiation, and structure/environment. All the articles that met the inclusion criteria for this study were further screened based on these five key gender dimensions.
Relevant data were extracted into a data collection template developed on AirTable. Articles were screened and extracted if they fit any of the five dimensions of gender and power identified in the framework. We extracted the publication metadata (ie, name of the first author, year of publication, publication title and publication country) and additional data (eg, publication type, research design and methods, study context, indices of vulnerability and resilience, and key findings from the research). Categories for the focal areas were not mutually exclusive, which means that a study could belong and be counted in more than one category where evidence of such contributions exists. During the data analysis, we grouped the articles by their specific focus on the different dimensions of gender and power relations. Table 1 presents the details of the classifications.
---
Collating, synthesising and reporting the results
This review describes, first, the characteristics of the studies that meet the study inclusion criteria and, second, the findings. We report the summary statistics describing data collection methods, vulnerability/resilience context (eg, maternal or child/newborn health) and gender dimension (eg, access to resources, division of labour, social norms, rules and decision making, power negotiation and structure/environment). We did not assess the quality or risk of bias for the included articles as the objective of this review was to scope and describe the breadth of gender dimensions in vulnerability or resilience in MNCH in LMICs. This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) statement guidelines to enhance transparency in reporting scoping reviews. 29
---
Patient and public involvement statement
Patients were not involved in the conduct of this study.
---
RESULTS
We identified 76 656 records through the database search (figure 1). We excluded 57 duplicate records and 73 638 abstracts that were flagged as potentially irrelevant to this study. Thereafter, we screened the remaining titles and abstracts (n=2871) and considered only 96 studies relevant, selecting them for full-text review. Of these, 79 studies did not meet the inclusion criteria and were excluded because of incorrect population, outcomes, setting or study design, or the lack of a gender focus in the analysis. Subsequently, 17 studies met our inclusion criteria for a promising gender-sensitive analysis of vulnerability and resilience in MNCH in LMICs. Online supplemental material S2 provides the details of these studies, including the year of publication, country of publication, context of the study, study design and key findings related to gender as regards vulnerability and resilience in MNCH.
---
Study characteristics
A total of 17 studies met the inclusion criteria for a gender analysis. Out of these, 13 focused on maternal health and four on child health (figure 2). Eleven studies focused on sub-Saharan African countries (figure 2), of which three were from Kenya. Resilience was a more dominant focus: eight on maternal health and two on child health (online supplemental material S2).
Figure 3 presents the distribution of gender themes across maternal and child health contexts. Access to resources and decision making was the most common focus of the identified studies on both maternal (five) and child (three) health. Three studies examined power negotiation in relation to maternal (two) and child (one) health. Two studies also highlighted partner emotional or mental support in maternal health (two) and another two on the decision-making ability of mothers (two). Only a few studies examined how social norms and division of labour intersect with maternal health.
---
Access to resources
Access to resources emerged as the dominant gender-focused theme (8 of 17 studies). [30][31][32][33][34][35][36][37] In most studies, pregnant women or mothers lived in households characterised by low socioeconomic status and had lower levels of education, all of which are potentially related to poor access to maternal and child healthcare services. Among pregnant women living in a community of metropolitan Santiago, Chile, 31 low socioeconomic status was found to be related to deteriorating reproductive, maternal and neonatal health. Warren et al supported this finding and found that most women affected by fistula had secondary education as the highest level of education and a very low monthly income. 36 Most primary caretakers, including mothers, were not income earners and often relied heavily on their spouses or other household members for money.
Access to resources also emerged as an important barrier to child healthcare. For example, Johnson et al demonstrated that the classification as orphan and vulnerable children (OVC) directly and indirectly influenced the risk of childhood morbidity (eg, diarrhoea, fever and acute respiratory infection). 30 This is because OVCs were more likely to be found in households headed by adults (40 years old), where the mother/caregiver had inadequate access to socioeconomic resources, such as inadequate education, and in urban areas.
Many women were often in precarious positions, relying on their spouses for financial support to access healthcare services even during emergencies. A study in Kenya reported that irrespective of marital status, having male support (eg, husband, brother or uncle), particularly financial support and help in securing transport to hospitals for care, was critical. 36 Some women failed to attend clinics because of a lack of support from their husbands. Most husbands did not provide their wives with adequate funds for their needs during delivery. 32 The lack of rapid access to money was another important contributing factor to a child's deteriorating condition; it influenced the initiation of a treatment-seeking action, including where and by whom (all households) the action was performed. For instance, women in a study made many references to 'waiting to talk to my husband', 'waiting to be sent money from my husband,' and waiting for 'his permission to pursue an action.' 37 Such gender-reinforced inequality in access to resources could subsequently affect the health-care-seeking behaviour of mothers and, ultimately, affect childcare, especially in the context of costly maternal healthcare services.

Table 1 (extract): Gender dimensions and guiding questions used to classify the included studies.
Access to resources (Who has what?): To what extent do women and men have the same access to education, information, income, employment and other resources that contribute to improvement in maternal, newborn, and child health? Do women have sufficient means to make decisions and access healthcare services without financial restrictions?
Division of labour (Who does what?): Division of labour within and beyond the household and everyday practices. How do women's social roles, such as childbearing, childcare, and infant feeding, affect their economic opportunities and access to health facilities?
Social norms (How are values defined?): Social norms, ideologies, beliefs and perceptions. How does stigma inhibit women's access to maternal healthcare services and are these available to unmarried women and teenage mothers? How do cultural norms about motherhood put women at risk of adverse health?
Agency and decision making (Who decides?): Agency and decision making (both formal and informal). To what extent are women able to advocate for their health needs and contribute to household decisions that shape their and their children's health?
Power negotiation (How is power enacted, negotiated or challenged?): Critical consciousness, acknowledgement/lack of acknowledgement, agency/apathy, interests, historical and lived experiences, resistance or violence. How is power enacted and negotiated in relation to maternal, newborn, and child health, and how do power dynamics or women's experience of intimate partner violence contribute to adverse health for women, children and their families?
---
Division of labour
Only one study examined the dimension of the division of labour and how it intersects with maternal and child health. 38 This study included 36 Ugandan women who were admitted with obstetric near-miss events and revealed that women's need to balance economic activities and reproduction often shaped both their vulnerability and their ability to recover from obstetric complications. In such circumstances, social networks or social capital were generally perceived as an essential component of women's resilience because they provide women with financial, material and emotional assistance, including assistance with household responsibilities such as childcare. 38
---
Social norms
One study examined the dimensions of social norms in maternal health. 39 It explored how values related to motherhood are defined and how this definition shapes or inhibits women's access to maternal healthcare services or places women at risk of adverse health. An in-depth case study of a woman from Burkina Faso suggested that structural impediments, including motherhood and childbearing, limit individual resilience. 39 This case study noted that the high level of social pressure on women to bear children as soon as possible, even when they are not physically or mentally capable, and the stigma associated with childlessness exacerbate maternal mortality and morbidity risks. 39 These conditions contributed to the death of the woman in the case study, who could not be rescued from childbirth-related complications despite having access to skilled birth attendance and emergency obstetric care. 39
---
Agency and decision making
Two studies underscored the ability of women and mothers to make informed choices and contribute to decisions related to maternal and child healthcare. 40 41 For example, Prates et al showed women's inability to adequately plan the timing of childbirth because of poor socioeconomic status and inequalities in gender power, all of which contribute to multiparity. 41 More importantly, the existing power imbalance motivates male partner
resistance to condom use as a means of family planning. 41 Additionally, Den Hollander et al in Ghana underscored women's low negotiating ability and autonomy in healthcare decision making. 40 The study reported wide power differences between health providers and women, especially in a context shaped by authority. Women were generally uninformed about their basic health information. A high level of therapeutic misconceptions was also observed in this study. Women were also reported to rely more often on a medical professional's opinion rather than being guided by their motivation. 40
---
Power negotiation
Power negotiation also emerged as a dominant gender dimension of vulnerability and resilience in maternal and child health. This dimension refers to how power is enacted and negotiated in relation to MNCH and how power dynamics or women's experience of intimate partner violence contributes to adverse health for women, children and their families. Our analysis found two studies that examined power negotiation in maternal health 42 43 and one in child health. 44 Although seropositive status disclosure is a crucial aspect of HIV programming, women living with HIV were generally reluctant to disclose their HIV status to their partner to avoid negative reactions from the latter, including intimate partner physical violence. 44 Men were often not in favour of having their wives tested, fearing the indirect disclosure of their own infection. 44 Nonetheless, partner involvement is crucial for prevention of mother-to-child transmission (PMTCT), especially because this might require mothers to use antiretroviral therapy and formula feeding for infants. The authors recommended couple counselling and partner involvement in PMTCT programmes, as testing only women can increase their susceptibility to violence despite careful counselling.
Furthermore, women's exposure to intimate partner violence could also affect other aspects of their health. For example, Vivilaki et al observed that the lack of or disappointment with partner support, poor marital
relationship and emotional/physical abuse had been associated with high levels of postpartum anxiety and depression. 43 Likewise, McNaughton Reyes et al found that women exposed to intimate partner violence may be likely to experience persistent poor mental health across the antenatal and postnatal periods. 42
---
Partner emotional or affective support
The two studies on partner emotional or affective support were primarily related to maternal health. 45 46 Families and partners often reacted negatively by rejecting unwed pregnant teenagers or teenage mothers. 46 These rejections were expressed differently, including avoiding pregnant teenagers or verbal abuse. 46 The analysis suggested that low-resilient women with threatened premature labour reported higher pressures from child support concerns after delivery, less active coping, less positive affect and more negative affect. 45
---
DISCUSSION
This scoping review illuminates the gendered dynamics of vulnerability and resilience in MNCH research. Based on the 17 studies reviewed, we found that gender norms, roles and relationships significantly influence and reinforce vulnerability and resilience in maternal and child health. The role of gender-transformative interventions cannot be overemphasised in addressing these societal structures and widely held social values that perpetuate the gender inequities identified in this review. Our work highlights some promising gender-transformative interventions that should be prioritised in addressing vulnerabilities in MNCH (see table 2 for the summary). These are potential interventions based on the problems identified. Most importantly, women should have unhindered access to maternal and child healthcare services regardless of education, level of wealth, age or marriage.
As highlighted in this review, access to resources was a dominant theme in 8 of the 17 reviewed studies. [30][31][32][33][34][35][36][37] Mothers in most of the studies reported having to wait for their husbands or other relatives for funds before they could access healthcare services. This process could pose a significant threat to their own and their children's health and well-being, especially during emergencies. Women's access to healthcare services is further compounded by socio-cultural stereotypes, including those surrounding marriage and adolescent motherhood. Multiple studies have highlighted how cultural stereotypes and stigma may hinder healthcare access for the same people who need the service the most. 47 In some cultural settings, unmarried women and adolescent mothers are unable to access care, partly because of the emphasis on marriage and motherhood in many African societies. Many women in search of assistance have fallen victim to human trafficking rings in baby factories, where their babies are sold and they are held against their will, thereby compounding their woes. 48 49 However, these barriers to healthcare access can be alleviated through multisectoral interventions that address sociocultural stereotypes and the high costs of access to health services, including the cost of registration, treatment and care. For example, in Nigeria, the removal of user fees and increased community engagement for the most vulnerable is associated with a higher level of maternal health-seeking behaviour. 50 Similar findings have been reported in other LMICs, including China, Zambia, Jamaica and India. 51 52 Although the abolition of user fee policies is necessary to achieve universal access to quality healthcare, multiple studies have underscored that such policies are not sufficient to improve maternal healthcare utilisation. 53 54 The removal of user fees may increase uptake but may not reduce mortality proportionally if the quality of facility-based care is poor. 55 This may be especially salient in settings where healthcare access is limited by structural barriers related to the distance of health facilities or the cost of transportation, waiting times and other additional costs. 56 57 Masiye et al emphasised that the cost of transportation is mainly responsible for limiting the protective effect of user fee removal on catastrophic healthcare expenditure among the poorest households. 57 This finding is supported by Dahab and Sakellariou, who identified transportation barriers as among the most important barriers to maternal health in low-income African countries. 56 In fact, one study in our review reported that receiving financial support and help in securing transport to hospitals for healthcare is critical. 36 Previous studies have also highlighted that poorly implemented user fee removal policies benefit well-off women more than poor ones, and in cases where there are significant immediate effects on the uptake of facility delivery, this trend is not sustained over time. 58 59 Given these findings, there is an overarching need for comprehensive and multisectoral approaches to achieve sustainable improvements in maternal health. In some studies, women who received financial incentives as a part of neonatal care or conditional cash transfers reported better healthcare-seeking behaviours than those who did not.
60 Morgan et al emphasised that financial incentives can increase the quantity and quality of maternal health services and address health systems and financial barriers that prevent women from accessing and providers from delivering quality and lifesaving maternal healthcare. 60 There is also an increasing consensus on the need to engage the community and religious leaders in challenging many of the cultural impediments to healthcare access. Countries in which these have been attempted have reported huge successes in improving healthcare access and service utilisation.
In several LMICs, women are tasked with the responsibility of childbearing and child-rearing; both could significantly affect women's economic productivity. Empowering women through skill acquisition could also offer a viable financial alternative and alleviate the high cost of accessing healthcare services, especially for women in low socioeconomic strata. Adequate incentives and support for mothers of children could also significantly ease the pressure on women to balance motherhood and economic activities. Some studies have reported the positive effects of programmes that help women with childcare. 61 62 Such empowerment programmes could also be extended to single women and women in sole-based or female-headed households, because these family types are characterised by low levels of education and household wealth.
Table 2 (extract): Recommended gender-transformative interventions by gender dimension.
Division of labour (Who does what?)
► Provide adequate support and affordable childcare for mothers to enhance their productivity and participation in the labour force.
► Incentivise programmes that motivate the involvement of men in childcare and house chores.
Social norms (How are values defined?)
► Address issues regarding cultural stereotypes that impede maternal access to healthcare services, including those related to marriage and adolescent motherhood. This could be in the form of providing a friendly and safe environment for adolescent and unmarried mothers to access healthcare.
► Engage community leaders in alleviating social norms that put women and girls at risk of poor health. This includes social norms that limit the contributions of women beyond motherhood.
Agency and decision making (Who decides?)
► Provide universal access to safe and effective means of contraception, irrespective of the level of education and wealth.
► Strengthen the capacity of women and girls through education and job creation to contribute significantly to household decision making.
► Empower women to make decisive decisions about whether they want to have a/another baby and when they want to do so.

Another important gender dimension is the need for women and mothers to make decisions about their health and well-being. As highlighted in our review, women have limited contribution to decision-making processes that are related to healthcare and family planning. 40 41 This limitation is complicated by power imbalances between women and their spouses and between women and healthcare workers. 40 41 One study found that women are only aware of condoms as a means of contraception and that their male partners resist using condoms. However, the women are unwilling to use other means of contraception, perhaps because of known or perceived side effects.
Family planning services must be integrated into existing maternal and child health programmes, so that women are adequately equipped with sexual and reproductive health information and have the autonomy to choose their preferred means of contraception with minimal effects on pleasure. Male partner involvement is also crucial for the prevention of mother-to-child transmission (PMTCT) of HIV, especially because this requires mothers to use antiretroviral therapy and to formula-feed the child. 44 Although spousal involvement during childbirth and child-rearing could alleviate some of the economic implications of motherhood, many male partners are not usually involved in childcare. 63 A few studies in our review reported on women's experiences of intimate partner violence and how this intersects with maternal and child vulnerabilities. [42][43][44] Women's exposure to intimate partner violence is associated with high levels of postpartum anxiety and depression and with persistent poor mental health across the antenatal and postnatal periods. 42 43 The fear of intimate partner violence has also been reported to influence women's disclosure of their HIV status to their spouses. 44 This occurs especially because men often do not favour having their wives tested, fearing the indirect disclosure of their own infection. As recommended by Gaillard et al 44 and other scholars, 64 the continued counselling of women alone may not eliminate some of the maternal risks of intimate partner violence. However, MNCH programmes could alleviate these risks through couple counselling and partner involvement in PMTCT programmes.
Aside from increasing male partner involvement in reducing maternal risks of intimate partner violence, the development of effective systems and strategies for the reporting and management of intimate partner violence and abuse is important. Many LMICs have legal structures for seeking redress for intimate partner violence; however, reporting remains limited and ineffective. Multiple studies have examined women's motivation to remain in violent unions. [65][66][67][68] The findings of these studies, among several others, have highlighted women's reliance on their partners for subsistence and the stereotypes associated with being divorced, among other factors. As a result, strong systems may be especially important for women of low socioeconomic status who must remain in violent marriages for survival. Altogether, these findings point to the need for context-specific and women-centric perspectives in developing strategies to eliminate violence against women, as such strategies may be ineffective if they do not address some of the bottlenecks to combating violence against women. Some studies have reported the effectiveness of women's social empowerment combined with economic empowerment in reducing women's vulnerabilities to intimate partner violence. 69 Such interventions may also provide women with resources to access healthcare services and alleviate maternal experiences of intimate partner violence. However, these interventions could aggravate experiences of intimate partner violence, especially in settings where maternal empowerment is perceived to threaten established gender norms. [70][71][72] Nonetheless, multiple studies in Tanzania have reported that maternal empowerment has led to considerable reductions in physical intimate partner violence and posed no additional adverse health risks. 69 Watts and Mayhew 73 and García-Moreno et al 74 recommended a more active approach, that is, integrating the health system's response to violence into maternal and child healthcare. Today, there is a global consensus on the need to strengthen healthcare professionals' ability to identify victims of intimate partner violence and provide first-line supportive care and referral to other care services. 74 A functional and well-financed health system is also important to prevent violence against women and to respond to victims and survivors in a consistent, safe and effective manner that enhances their health and well-being. 74 Health providers could ask women about their experiences of violence or evaluate them for potential indicators of partner violence, such as a history of unexplained injury or maternal bleeding, preterm labour or birth, and foetal injury or death. 73 The healthcare system can also provide women with a safe environment in which they can confidentially disclose experiences of violence and receive a supportive response.
Although our review addresses an important gap in the literature, it is not without limitations. The first is that the inclusion of articles in this review was based solely on their focus on vulnerability or resilience in LMICs. Therefore, studies of vulnerability or resilience outside LMICs, including in locations where pockets of vulnerable populations occur in high-income nations, have not been captured. Additionally, while we made every attempt to find all accessible material, it is possible that we omitted some publications with distinct perspectives that were not represented in the review's evidence, particularly from the grey literature, given how broad it is.
---
CONCLUSION
Only a few studies have examined vulnerability and resilience in maternal and child health, especially in LMICs. We have identified some gendered dynamics of vulnerability and resilience in MNCH through this scoping review. Findings from this scoping review suggest that there is a great need to continue to empower women and mothers to access resources, contribute to decisions about their own health, and eliminate structural or social stereotypes that limit their agency.
---
Contributors OAM conceptualised the review, developed the initial search strategy for the study, screened studies for eligibility, reviewed draft manuscript and supervised the overall research. OAU developed the search strategy and machine learning programme for study screening and screened studies for eligibility. EOO and FAS drafted the manuscript. NKI, ICM and BO screened studies for eligibility, performed data extraction and reviewed the draft manuscript. All authors read the manuscript, contributed to the revisions as required and approved the final manuscript. OAM takes responsibility for the overall content as the guarantor.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
---
Patient consent for publication Not applicable.
Ethics approval This study did not receive nor require ethics approval, as it does not involve human or animal participants.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement All data relevant to the study are included in the article or uploaded as online supplemental information. Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise. | 37,858 | 2,657 |
c1cf0d667dd736801b44a7904dd9d8df72a86adb | Socioeconomic disparities in the management of coronary heart disease in 438 general practices in Australia. | 2,020 | [
"JournalArticle"
] | CHD patients (aged > _18 years), treated in 438 general practices in Australia, with > _3 recent encounters with their general practitioners, with last encounter being during 2016-2018, were included. Secondary prevention prescriptions and number of treatment targets achieved were each modelled using a Poisson regression adjusting for demographics, socioeconomic indicators, remoteness of patient's residence, comorbidities, lifetime follow-up, number of patient-general practitioner encounters and cluster effect within the general practices. The latter model was constructed using the Generalised Estimating Equations approach. Sensitivity analysis was run by comorbidity. | Introduction
Coronary heart disease (CHD) remains the leading cause of death and disability globally despite significant advances in its diagnosis and management over the past decades. In Australia alone, in 2017-2018 more than 580,300 adults (approximately 312 cases per 10,000 population) self-reported CHD, which, in turn, accounted for 12% of all deaths and more than 160,438 hospitalisations (approximately 166 admissions per 10,000 public and private hospital separations). 1,2 In Australia, as in the USA and UK, 3,4 CHD disproportionately affects the most socially disadvantaged and those living in the more remote geographic locations. 5 For example, the corresponding rates for prevalence, hospitalisation and death from CHD in the lowest socioeconomic areas are 2.2, 1.3 and 1.6 times those of the highest socioeconomic areas. 2 Similarly, the rates for CHD hospitalisation and CHD death in remote or very remote areas are 1.5 and 1.4 times those of major cities. These differences are partly due to the socioeconomic gradient in the prevalence of cardiovascular risk factors such as smoking and obesity. 2 Moreover, geographical disparities in both access to treatment and its affordability are likely contributors to the variation in the CHD burden in the Australian and other populations. A recent survey in Australia reported that of people who received a prescription for any medication in the past 12 months, 7% delayed getting or did not get the prescribed medication due to cost. 6 Moreover, a systematic review found that over half of the studies that focused on access to drug treatment for the secondary prevention of CHD reported lower treatment rates for patients with low compared with those with high socioeconomic status (SES). 7 Primary care is an important component in the secondary prevention of CHD. General practitioner (GP) visits, preparation of a chronic disease management plan and use of cardiovascular medications after hospitalisation for CHD have been shown to reduce the risk of emergency readmission and death from cardiovascular disease. 8,9 Guidelines for the management of all patients with CHD in primary care have been available in Australia since 2012. 10 However, as we have shown in a recent report, their adoption is not yet universal and significant disparities exist in their application such that men are more likely than women to receive a general practice management plan from their GP. 11 The aim of the current study was to investigate in a large national general practice dataset, MedicineInsight, whether disparities in the management of CHD exist based on socioeconomic indicators and remoteness of patients' residence.
---
Methods
MedicineInsight is a large-scale Australian national general practice database of longitudinal de-identified electronic health records established by NPS MedicineWise with core funding from the Australian Government Department of Health. [11][12][13] Adults (aged ≥18 years) with CHD who had had ≥3 encounters with their GPs, with the last encounter occurring during 2016-2018, were included in this population-based study (Supplementary Material Figure 1 online). Patients with CHD were identified through an algorithm developed by NPS MedicineWise, 11 which utilised information from relevant coded entries or free-text terms recorded in at least one of three fields: diagnosis, reason for encounter, and reason for prescription (Supplementary Table 1).
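The actual NPS MedicineWise case-finding algorithm and the term list in Supplementary Table 1 are not reproduced here; the sketch below is only an illustration, in Python/pandas, of how such a three-field search might be expressed. The field names and the abbreviated CHD term list are hypothetical.

```python
# Illustrative only: flag a patient as having CHD if any of the three record
# fields contains a term from a (here abbreviated, hypothetical) CHD term list.
import pandas as pd

chd_terms = ["coronary heart disease", "ischaemic heart disease",
             "angina", "myocardial infarction"]          # abbreviated, hypothetical
pattern = "|".join(chd_terms)

records = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "diagnosis": ["Angina pectoris", "", "Type 2 diabetes", ""],
    "reason_for_encounter": ["", "chest pain review", "", "repeat scripts"],
    "reason_for_prescription": ["", "", "", "post myocardial infarction"],
})

fields = ["diagnosis", "reason_for_encounter", "reason_for_prescription"]
hit = pd.Series(False, index=records.index)
for col in fields:
    # Case-insensitive substring match against any term in the list.
    hit |= records[col].str.contains(pattern, case=False, na=False)

chd_patients = records.loc[hit, "patient_id"].unique()
print(chd_patients)   # patients 1 and 3 are flagged in this toy example
```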
The general practice management plan for CHD is a tool developed in Australia for the secondary prevention of CHD in primary care. 14 The recommendations that this study investigated have been published. 11 Medications were counted as secondary prevention prescriptions if they were prescribed during the study period. Missing data or lack of documentation of risk factor measurements were treated as non-assessment during the study period. SES was based on the Socio-Economic Indexes for Areas - Index of Relative Socio-Economic Disadvantage (SEIFA-IRSD), 15 a residential postcode-based composite score that ranks geographic areas across Australia according to their relative socio-economic advantage or disadvantage. SEIFA-IRSD scores in this study were based on patients' most recent residential addresses, as recorded at the last patient-GP encounter during the two-year study period. We further categorised the Australian Bureau of Statistics SEIFA-IRSD deciles into five groups.
---
Statistical analysis
The proportions of patients (a) with secondary prevention prescriptions during 2016-2018, (b) assessed for risk factors, and (c) who had achieved treatment targets were reported by SEIFA-IRSD fifths (i.e. first (most disadvantaged), second, third, fourth and fifth (least disadvantaged)) and by residential remoteness (i.e. major city, inner regional, outer regional, and remote or very remote). The direct standardisation method was used to estimate age- and sex-standardised proportions, utilising the prevalence of CHD in the Australian standard population as reported in the National Health Survey 2017-2018. 1 Differences in the age- and sex-standardised figures by SES and by remoteness were each evaluated using chi-square tests. Spearman's rho correlation coefficient was used to test for monotonic relationships between SEIFA-IRSD and other variables.
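The analyses were run in Stata (see the end of this section). As an illustration only, a minimal Python sketch of the direct standardisation and the Spearman trend test is given below; the strata, standard-population weights, proportions and column names are hypothetical, not the National Health Survey figures. Chi-square comparisons of the standardised figures could be made analogously with scipy.stats.chi2_contingency.

```python
# Direct standardisation: weight stratum-specific proportions by the standard
# population's share in each age-sex stratum, then test SES trends with Spearman.
import pandas as pd
from scipy.stats import spearmanr

# Stratum-specific proportions achieving an outcome, by SES fifth (hypothetical).
obs = pd.DataFrame({
    "ses_fifth":  [1, 1, 1, 5, 5, 5],
    "stratum":    ["45-64_f", "65+_f", "65+_m"] * 2,
    "proportion": [0.52, 0.48, 0.50, 0.61, 0.58, 0.60],
})
# Standard-population weights per stratum (sum to 1 across strata).
std_weights = {"45-64_f": 0.25, "65+_f": 0.35, "65+_m": 0.40}

obs["weight"] = obs["stratum"].map(std_weights)
standardised = (obs.assign(w=obs["proportion"] * obs["weight"])
                   .groupby("ses_fifth")["w"].sum())
print(standardised)   # age- and sex-standardised proportion per SES fifth

# Monotonic trend between SES fifth and, e.g., number of medications prescribed.
patient_level = pd.DataFrame({
    "ses_fifth": [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
    "n_meds":    [4, 3, 3, 2, 2, 4, 4, 3, 2, 1],
})
rho, p = spearmanr(patient_level["ses_fifth"], patient_level["n_meds"])
print(rho, p)
```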
Secondary prevention prescriptions and the number of treatment targets achieved were each modelled using Poisson regression. To account for variation in achieving treatment targets during the study period, we ran the latter model using the Generalised Estimating Equations (GEE) approach while accounting for three possible measurements of the risk factors related to the treatment targets shown in Supplementary Table 2. For each patient, the first (baseline) available, a randomly selected and the last available measurement in the two-year study period were used. Where only a single measurement was available for a patient in the study period, it was carried over to all three.
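A minimal sketch of such a treatment-targets model is shown below using Python's statsmodels (the authors report using Stata). The simulated data, the abbreviated covariate set, the variable names, the 0-4 target count and the exchangeable working correlation structure are all assumptions made for illustration.

```python
# Sketch: Poisson GEE for the number of treatment targets achieved, with the three
# repeated risk-factor measurement occasions per patient treated as a cluster.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients = 500

base = pd.DataFrame({
    "patient_id": np.arange(n_patients),
    "ses_fifth": rng.integers(1, 6, n_patients),        # 1 = most disadvantaged
    "remoteness": rng.choice(["major_city", "inner_regional",
                              "outer_regional", "remote"], n_patients),
    "age": rng.normal(66, 16, n_patients),
    "female": rng.integers(0, 2, n_patients),
})
# Long format: one row per patient per measurement occasion (first/random/last).
long = pd.concat(
    [base.assign(occasion=o) for o in ("first", "random", "last")],
    ignore_index=True,
)
# Outcome: number of treatment targets achieved at each occasion (0-4 here).
long["targets_met"] = rng.poisson(1.5, len(long)).clip(0, 4)

gee = smf.gee(
    "targets_met ~ C(ses_fifth, Treatment(reference=5)) + C(remoteness) "
    "+ age + female",
    groups="patient_id",                      # repeated measurements per patient
    data=long,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),  # assumed within-patient correlation
)
res = gee.fit()
print(np.exp(res.params))                     # incidence rate ratios (IRRs)
```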
The models adjusted for age, sex, residential remoteness, SES, indigenous status, state and territory, body mass index (BMI), smoking status, acute myocardial infarction, heart failure, diabetes, hypertension, stroke, chronic kidney disease, depression, anxiety, lifetime years of follow-up and number of patient-GP encounters during the two-year study period. The standard errors were adjusted for correlation within 438 general practices using the cluster sandwich estimator. In the treatment targets model, diabetes, hypertension, BMI and smoking were excluded as these were incorporated in the targets.
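For the prescriptions model, a comparable sketch with practice-level cluster-robust (sandwich) standard errors might look as follows. Again, this is illustrative Python rather than the authors' Stata code, with a reduced, hypothetical covariate set and simulated data.

```python
# Sketch: Poisson regression for the number of secondary prevention medications
# prescribed, with standard errors clustered on the general practice.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "practice_id": rng.integers(0, 438, n),             # 438 practices
    "n_meds": rng.poisson(2.5, n).clip(0, 4),            # 0-4 daily-use medicines
    "ses_fifth": rng.integers(1, 6, n),
    "remoteness": rng.choice(["major_city", "inner_regional",
                              "outer_regional", "remote"], n),
    "age": rng.normal(66, 16, n),
    "female": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "encounters": rng.poisson(12, n),
})

model = smf.glm(
    "n_meds ~ C(ses_fifth, Treatment(reference=5)) + C(remoteness) + age "
    "+ female + diabetes + encounters",
    data=df,
    family=sm.families.Poisson(),
)
# Cluster-robust (sandwich) standard errors by practice.
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["practice_id"]})
print(np.exp(res.params))        # incidence rate ratios (IRRs)
print(np.exp(res.conf_int()))    # 95% CIs on the IRR scale
```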
The dose-response effects of different levels of socioeconomic disadvantage on the number of secondary prevention prescriptions and on the number of treatment targets achieved were tested using likelihood ratio tests, in which nested regression models were compared to determine whether the simpler model was rich enough to capture the trends in the data. The nested models that assessed treatment targets were based on the randomly selected measurements.
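A sketch of such a nested-model comparison, with SES entered as a linear trend in the reduced model and as separate categories in the full model, is given below; the data are simulated and the covariate set is abbreviated and hypothetical.

```python
# Likelihood-ratio test for a dose-response (linear trend) effect of SES in a
# Poisson model: compare the linear-trend (reduced) and categorical (full) fits.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "n_meds": rng.poisson(2.5, n).clip(0, 4),
    "ses_fifth": rng.integers(1, 6, n),
    "age": rng.normal(66, 16, n),
    "female": rng.integers(0, 2, n),
})

reduced = smf.glm("n_meds ~ ses_fifth + age + female",
                  data=df, family=sm.families.Poisson()).fit()
full = smf.glm("n_meds ~ C(ses_fifth) + age + female",
               data=df, family=sm.families.Poisson()).fit()

lr_stat = 2 * (full.llf - reduced.llf)
df_diff = full.df_model - reduced.df_model
p_value = stats.chi2.sf(lr_stat, df_diff)
print(f"LR chi-square = {lr_stat:.2f}, df = {df_diff:.0f}, p = {p_value:.3f}")
```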
---
Sensitivity analysis
Sensitivity analyses were conducted by prevalent comorbidity. Forest plots showing age-, sex- and SES-adjusted incidence rate ratios of the study outcomes by condition were constructed using random-effects models.
We further used multiple imputation by chained equations to impute the missing data on the randomly selected measurements using the mi command in Stata, with 50 imputed datasets and final estimates obtained using Rubin's rules. 16 The Poisson regression modelling treatment targets was re-run using the imputed datasets.
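For readers without Stata, an equivalent workflow can be sketched with the MICE implementation in statsmodels; this is an illustration with simulated data and hypothetical variables, not the authors' code, and it pools the 50 analyses across imputed datasets using Rubin's rules.

```python
# Sketch: chained-equations imputation of a missing predictor, then a pooled
# Poisson model combined across imputations with Rubin's rules.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICE, MICEData

rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "targets_met": rng.poisson(1.5, n).clip(0, 4).astype(float),
    "ses_fifth": rng.integers(1, 6, n).astype(float),
    "age": rng.normal(66, 16, n),
    "sbp": rng.normal(135, 15, n),
})
# Introduce missingness in a predictor, as with undocumented risk factors.
df.loc[rng.random(n) < 0.3, "sbp"] = np.nan

imp = MICEData(df)                              # chained-equations imputer
mice = MICE("targets_met ~ ses_fifth + age + sbp",
            sm.GLM, imp,
            init_kwds={"family": sm.families.Poisson()})
# 50 imputed datasets; estimates are pooled across them (Rubin's rules).
results = mice.fit(n_burnin=10, n_imputations=50)
print(results.summary())
```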
All analyses were performed using Stata/SE 15.0 (Stata Corp LP., College Station, Texas, USA).
---
Ethics clearance
---
Results
General practice records for 137,408 patients with CHD (46.6% women) were analysed. Of these records, 81.8% were from 2016-2018, 15.8% from 2015-2017 and 2.3% from 2014-2016.
---
Patient characteristics by SES and remoteness
Patient characteristics varied by SES (Table 1). Patients belonging to the most disadvantaged fifth were the oldest (mean age 67.0, SD 16.1 years compared with 66.2, SD 16.8 years in all other groups combined, p < 0.001). This was reflected in a higher prevalence of comorbidities in this most disadvantaged fifth (Supplementary Table 3) and a higher number of patient-GP encounters in the study period (Table 1). Socioeconomic disadvantage also varied by residential remoteness. Approximately 75% of individuals living in 'outer regional locations' belonged to the two lowest SES fifths compared with 58.4% in 'remote or very remote locations' and 56.7% in 'inner regional locations' (Supplementary Table 4). Patients residing in major cities were the least socioeconomically disadvantaged, with approximately one-quarter of patients in the lowest two SES groups. The oldest patients resided in inner regional locations, while the youngest were in remote or very remote locations. The prevalence of major comorbidities was lower in this latter subgroup (Supplementary Table 4).
---
Prescription of medications by SES and remoteness
Higher proportions of patients from the most disadvantaged group were prescribed any of the five recommended medications compared with the other socioeconomic groups (Figure 1). A significant monotonic association between SES and prescription of the four medications recommended for daily use (i.e. excluding short-acting nitrates) was observed, with the number of prescribed medications increasing incrementally as SES declined (Spearman rho = -0.106, p < 0.001). In the risk-adjusted model, patients in the most disadvantaged fifth were 8% more likely to be prescribed more secondary prevention medications compared with the least disadvantaged group (incidence rate ratio (IRR) 1.08, 95% confidence interval (CI) 1.04-1.12, p < 0.001) (Table 2).
The highest proportions of patients prescribed any of the medications for secondary prevention were observed in inner regional areas, and the lowest proportions were observed in remote or very remote areas (Supplementary Figure 2), in keeping with the different age profiles of these groups. In the risk-adjusted model, prescribing in major cities and in inner and outer regional locations was similar, whereas patients residing in remote or very remote areas were 12% less likely to be prescribed medications for secondary prevention than those in major cities (IRR 0.88, 95% CI 0.81-0.96, p = 0.003) (Table 2).
---
Assessment of risk factors by SES and remoteness
During the two-year study period, between 92% and 95% of individuals had their smoking status and blood pressure assessed by their GP, whereas approximately 75% had their blood lipid profile tested and only 18-27% had their waist circumference (as a measure of central obesity) measured. A negative association between SES and risk factor assessment was observed, with risk factors less frequently assessed as SES rose (p < 0.001 for all) (Supplementary Figure 3). In contrast, the assessment of risk factors by remoteness varied by the risk factor assessed, with higher proportions assessed among patients living further away from major cities (Supplementary Figure 4).
---
Achievement of treatment targets by SES and remoteness
Of the patients who had their risk factors assessed, and using the last available measurements, targets were more likely to be achieved in patients belonging to higher socioeconomic classes (Figure 2), with similar patterns observed when treatment targets were based on first-, randomly-selected- or last-available measurements, as shown in Supplementary Figure 5. In the risk-adjusted model that accounted for three possible measurements per patient, the likelihood of achieving treatment targets dropped incrementally as SES declined. Individuals residing in remote or very remote locations were least likely to achieve risk factor targets (Table 3). A dose-response effect between SES and the number of treatment targets achieved was found (likelihood-ratio test chi-square = 3.59, p = 0.309).
In all models, interaction between socioeconomic disadvantage and residential remoteness was tested by the introduction of interaction terms into the regressions. No evidence of interaction was found based on the non-significant regression-derived p value for the interaction term: p > 0.05 in all.
---
Sensitivity analyses
To test for consistency, we further tested the study outcome measures separately by prevalent comorbidity, comparing the lower with the higher SES halves, with results consistently supporting the study's main findings (Figure 3).
Results obtained following multiple imputation supported the study's main conclusions (Supplementary Table 5).
---
Discussion
This nationwide study of general practices in Australia indicates that among those living with CHD, secondary prevention management is influenced by levels of both SES disadvantage and patient residential remoteness, but in opposing ways. Individuals with CHD residing in remote or very remote locations were significantly less likely to be prescribed medications for secondary prevention compared with those living in major cities. They were also less likely to achieve treatment targets. Conversely, the most socioeconomically disadvantaged individuals were more likely to be prescribed medications for secondary prevention and were more likely to be assessed for cardiovascular risk factors (but less likely to achieve risk factors targets) compared with those who were the least socioeconomically disadvantaged.
Australia provides universal health care, which includes subsidised healthcare services through the Pharmaceutical Benefits Scheme (PBS) and the Medicare Benefits Schedule (MBS). Items listed on the PBS usually involve a co-payment, with a lower co-payment for low-income earners and Indigenous Australians living with or at risk of chronic illness. 17 Despite these concessions, a higher proportion of patients in the most disadvantaged groups do not fill prescriptions due to cost. Socioeconomically disadvantaged patients with chronic diseases often struggle with out-of-pocket expenses, which negatively impact their health outcomes. 18 This may have contributed to the lower proportion who achieved targets in comparison with those in the least disadvantaged group. Patients from more disadvantaged areas are also likely to have higher cardiovascular morbidity. An Australian study reported a dose-response relationship between socioeconomic disadvantage and admission to a coronary care unit or intensive care unit among patients presenting with non-traumatic chest pain. 19 The socioeconomic disparities observed in the current study may be attributed to a range of socioeconomic determinants of health and health behaviours, 20 rooted in social rank as determined by knowledge of risk factors of disease, 21 SES-associated educational gradients, 22 health literacy and patient-physician communication, 23 occupational hierarchy and income. CHD is a multifactorial disease with clinical, genetic, behavioural and lifestyle risk factors often interacting and contributing to a higher level of coronary risk. 24 Of these, modifiable lifestyle and behavioural risk factors, such as poor diet, physical inactivity, smoking and obesity, disproportionately affect individuals from the most disadvantaged groups. Similar to our findings, studies have consistently reported such disparities in cardiovascular health even in countries with universal access to health care and after stratifying by smoking, comorbidity and obesity. 25 An Australian study on utilisation of health services in adults aged ≥45 years reported that a higher proportion of people in less disadvantaged groups did not fill a script compared with more disadvantaged groups of the population. 26 Paradoxically, however, patients from the least disadvantaged group were more likely to have achieved more treatment targets compared with those from the most disadvantaged group. It is possible that patients in the least disadvantaged group had their CHD managed by specialists rather than GPs: the same health service utilisation study reported that a higher proportion of people in the least disadvantaged group claimed the MBS service for specialist treatment compared with other socioeconomic groups (55% versus 48-49%). 26 Alternatively, individuals in the least disadvantaged groups may have opted to reduce risk factor levels by non-pharmacological means through the modification of lifestyle and behaviour.
In regard to CHD management by level of remoteness, prescribing rates for cardiovascular medication were generally higher in inner regional areas and lowest in remote or very remote areas despite the higher burden of CHD in rural populations, consistent with earlier reports. 27 Notably, our data do not suggest that this prescribing pattern is due to lower SES among those living in the most remote areas of the country; although major cities had the lowest proportion of the most disadvantaged individuals, there was little relation between SES and remoteness. For example, in this sample, 75% of individuals living in 'outer regional locations' belonged to the two lowest SES fifths compared with 58% in 'remote or very remote locations' and 57% in 'inner regional locations'.
A key strength of the current study is that we used a large and contemporary national GP dataset in Australia. Nevertheless, our results may not be entirely representative at a regional level since general practices participating in MedicineInsight had to have computerised records. 12 GP practices in locations that rely on paper-based records are not represented in this study. Our study utilised routinely collected data that are not intended for research purposes, hence there may have been errors in reporting and/or coding, and validation concerns. Missing information on blood pressure, smoking status and weight could be due to lack of documentation rather than lack of assessment. 13 We had no information on contraindications, which may have accounted for a small proportion of under-prescribing. We also lacked information on specialist care, which may have contributed to the relatively lower prescription rates, but higher target-achievement rates, in the least disadvantaged group. We further lacked drug dispensing data, which could have informed whether medication non-adherence or ineffective treatment led to non-achievement of treatment targets. Furthermore, any residential address changes over time were unknown to us and were unaccounted for.
This study identifies important implications for policy and clinical practice, notably that despite Australia's universal healthcare system, the level of CHD management received is influenced by SES and remoteness of residence, with the widest management gap observed in individuals from disadvantaged backgrounds and patients from remote or very remote locations. The documentation rates we report imply a continued need for programmes of support to increase screening for CHD risk factors and documentation of related clinical information, in accordance with the recommendations in the National Health and Medical Research Council guidelines. 10 More research is needed to understand clinician and patient behaviours and to assess whether policy incentives may help drive change in health behaviours.
---
Supplementary material
Supplementary material is available at European Journal of Preventive Cardiology online.
---
Author contribution
GM analysed the data, co-drafted the manuscript and is guarantor of the study. CMYL conceived the design of the study, secured funding for the study, obtained the data and co-drafted the manuscript. FS and SR secured funding for the study. MW provided statistical oversight. CKC provided clinical advice. RRH conceived the design of the study and secured funding for the study. All authors reviewed and approved the final manuscript.
Table footnote: a Model also adjusted for past years of follow-up, number of patient-general practitioner encounters and cluster effect within 438 general practices.
---
CI: confidence interval
| 20,021 | 676 |
6bc02c564cd7ba102b3cecb111246b663e7733c5 | A divine cosmopolitanism? Religion, media and imagination in a socially divided Cairo | 2,016 | [
"JournalArticle"
] | With a focus on young Egyptian women, this paper explores the different ways it becomes possible to reconcile a Muslim identity with a cosmopolitan openness towards the world. Informed primarily by transnational television, these women articulate a divine cosmopolitan imagination through which they form multiple allegiances to God, the nation and global culture simultaneously. Thus, a close analysis of their regular consumption of transnational television helps challenge linear and somewhat naturalized preconceptions of how Muslims articulate perceptions of self and others. In the articulation of both their cosmopolitan imagination and religious identities, young Egyptian women have become skilled negotiators, moving within and between mediated and non-mediated discourses. They move physically within a grounded place that sets the moral boundaries for bodily existence, yet shift subjectively between disembedded spaces of mediated representation, often providing new contexts for meaning and inclusivity. The result, for young Egyptian women, is a divine cosmopolitan imagination. | Introduction On Being Cosmopolitan and Religious
I remember once declaring to a group of acquaintances in London that I consider myself to be both very religious yet also cosmopolitan. I was unsure whether their surprised expressions and cynical reactions were caused by my association between being cosmopolitan and religious per se, or my admission to being cosmopolitan and my specific identity as a Muslim. As a young veiled woman who abstains from alcohol and follows the main teachings of my Islamic faith, perhaps they could not comprehend what exactly I considered to be 'cosmopolitan' about myself. The fact I was socialising with them (all non-Muslims representing three different countries) in a global café chain in London, speaking in English and discussing American foreign policy in the global south, did not detract from the fact that I am, underneath all this, still 'a Muslim'. In true Jihad vs McWorld style (Barber, 2003) Islam appeared to summon up images of parochial and intolerant groups, following the 'word of God' while closing themselves off from any other worldly forms of progress or development. In direct contrast, they equated being cosmopolitan with adopting an open and outward perspective; about being modern (secular?), globalized and hungry for cultural diversity. If such a binary outlook is taken for granted, then surely it presupposes that cosmopolitan and religious perspectives will remain segregated worlds, leaving the possibility of being a cosmopolitan Muslim no more than an unachievable oxymoron.
To challenge such a linear and somewhat naturalized preconception of how Muslims articulate perceptions of self and others, this paper demonstrates the complexities characterizing identities in a modern world of trans-temporality and intense mediated connectivity, and the ways in which identities are formed in layers (Georgiou, 2006) and informed by multiple attachments and connections 'of different types and at different levels' (Morley, 2000:232). Detailed ethnographic evidence from Egypt illustrates the ways in which young Muslim women negotiate their identities at the juxtaposition of age, class experiences, dominant discourses of gendered morality, religious values and a mediated articulation of global culture. As such, against a backdrop where mediated and non-mediated discourses represent inseparable spheres of influence in these women's lives, I analyze how an ongoing dialogue between the local and the global, self and others, distance and proximity, the secular and the religious coalesce both virtual routes and grounded roots through which they articulate a divine cosmopolitan imagination.
A comparative class-based analysis between the experiences of young working-class and lower-middle-class Egyptian women allows me to explore different ways it becomes possible to reconcile a religious and specifically Muslim identity with a cosmopolitan openness towards the world. I bring to the fore the centrality of transnational media as primary cultural resources through which these young women articulate and assess the world around them, both immediate and faraway. While almost every female participant in this study has never travelled outside of Egypt and, in many cases, does not even own a passport, they rely heavily on the media as their only 'passport' to the outside world, expanding their imaginative horizons and exposing them to the possibility of alternative realities, lifestyles and modes of expression. I draw on Abu-Lughod's (1995) seminal and much celebrated analysis of female domestic servants' consumption of local televised serials in Egypt, and the ways in which the dramatic narrative became a reassuring private space in which these women could be excessively melodramatic, exploring other, more desirable situations and identities unavailable to them in their everyday lives. While broadening my own analysis to encompass transnational television, I bring to the fore evidence of how televised repertoires of globality function as dynamic multi-way channels of negotiation for my female informants that mutually reinforce and shape both cosmopolitan and religious identities. On the one hand, Egyptian women's highly mediated cosmopolitan orientations are negotiated and filtered in relation to values and moralities that stem from very grounded religious identities. In turn, these religious identities themselves are being constantly weighed and re-assessed in light of a mediated exposure to diverse cultural happenings. For young Egyptian women, therefore, the question has never been whether it is possible for one to be a pious Muslim and modern cosmopolitan. For them, the dilemma is how such a delicate balance is best struck, subjectively and physically, on the ground, allowing them to conform to the divine values of their religion and the moral boundaries of their society, while also making full use of the potentials offered by a diverse array of cross-cultural connections.
For many, the incompatibility between Islam and cosmopolitanism was compounded in July 2013 after the doomed fate of political Islam in Egypt was sealed when Mohammed Morsi -the country's first ever President to arise from an Islamic party -was ousted by the military after just one year in office following days of mass civilian protest. The sour collapse of Muslim Brotherhood rule in Egypt has paved the way for numerous voices openly questioning whether Islam can ever be accommodating to modern, progressive and cosmopolitan ideals such as democracy and individual liberties (Nawara and Baban, 2014;Rakha, 2013). Crucially, although religion may have failed many Egyptians in relation to electoral politics and democratic representation, this must not detract from the fact that Islam continues to capture the hearts and minds of ordinary Egyptian citizens, claiming a constant and very natural presence which is highly visible, manifest and inseparable from the fabric of daily life. This was illustrated in the fact that although millions of Egyptians went out onto the streets to demand the early exit of the Brotherhood from the seat of power in the summer of 2013, concurrently, the constitutional declaration that was announced soon thereafter responded to mass pressure to recognize Islam as the state's official religion, and for Islamic Sharia to be clearly pronounced as its main source of jurisprudence. Mahmood (2005) captures how the discord between private and official articulations of religious discourse shaping Egypt's socio-political landscape is nothing new and dates back to the Islamic Revival of the 1970s. While a popular piety movement that developed in the 70s established religious knowledge as a vital means of organizing daily conduct for ordinary Egyptians, there were strong attempts to marginalize this under secular governance (Mahmood, 2005). In the light of such a complex and often contradictory political and social backdrop that has long plagued Egypt, and while recent post-uprising times involve struggles to (re)define how best to establish a modern nation-state both drawing on cosmopolitan democratic values while accommodating deep-seated religious principles, the timeliness of the discussion driving this paper is indisputable. I draw on nine months of rich ethnographic fieldwork conducted in Cairo and completed in the crucial few months immediately prior to the 2011 revolution. With access to such unique data, I hope to transfer the debate about religion's place in Egypt from formal discussion tables and parliamentary houses to the Egyptian people themselves, and especially women, whose opinion continues to be marginalized in the Egyptian public sphere.
---
A Cosmopolitan Imagination and the Media in a Socially Divided Cairo
The 2011 revolution was a very visible manifestation of the central role modern forms of media technology can play in helping shape the demands and social aspirations of Egypt's young generation. Women in particular emerged as central players within media space fuelling academic interest into the question of gender within the 'Arab Spring' and the ways prominent female activists played a central role in using their broad social media presence to mobilize and push for grassroots action. Enlightening as such research undeniably is, it often creates an artificial chasm between the media's seemingly insignificant and invisible role before 2011 and their substantial political potentials that Egyptian women 'suddenly' discovered post-2011. 1 However, findings from my research illustrate that beyond the Internet's role within the immediate moment of radical revolutionary change, the more long-term, yet less glamorous, banal ordinariness associated with television consumption as a daily practice, should not disguise its potential as a vehicle often partly establishing the conditions for change or dissatisfaction. Such a thesis is supported by Morley (2006:104) in his argument that the impetus for political transformation often comes from the many 'micro instances of "pre-political" attitude change' articulated through long term media consumption. On that score, I suggest that although media use was much less politicized, radical or even noted in the Egyptian public sphere prior to 2011, it still played a vital role in the everyday lives of women, particularly functioning as vital tools enabling them to assess, understand, negotiate and critique the world around them.
Daily access to transnational television has allowed young women participating in this research to become increasingly globally interconnected and aware of the presence of the distant other within media space, allowing even those with the least means to be included within this reality of (virtual) interconnection (Schein, 1999;Silverstone, 2007;Ong, 2009). As 22-year-old Dalia told me:
We live in a society where everything is controlled, particularly if you're a woman. Your family control where you can go and how you should dress, and the state controls how you live and what you can say. But they forget that we are a generation who has grown up with the media and so we see and hear alternatives; we know that there are other places in the world where citizens are respected, regardless of their gender, colour or economic background. What stops us from being like them? Life in Egypt has become unbearable and it's almost like a pressure cooker-we will explode at any moment.
Eighteen months after this poignant assertion, Dalia and many young women who participated in this study, flooded Cairo's streets in a momentous revolution supported to a large extent by the media and underpinned by a demand for 'bread, freedom and social justice'. In this light, I use the term cosmopolitan imagination with reference to how cosmopolitanism, for these young women, takes the form of a dynamic subjective space driven by a sense of connection and belonging to the outside world. Primarily through the media, such an imagination expands the cultural horizons of young Egyptian women, allowing them to engage in a re-imagination of local particularities and to adopt a more reflexive understanding of limits placed on the self (Elsayed, 2010). As such, a cosmopolitan imagination in Cairo cannot be understood through linear categories of analysis often used in theories of cosmopolitanism that draw entirely on the experiences of Western secular and liberal contexts. In a situation where their own realities are so dismal, characterized by poverty and state repression, young Egyptians' cosmopolitanism is not about an ethical concern for a distant other (Silverstone, 2007;Chouliaraki, 2008). In a context where the majority of participants I worked with have never travelled outside of Egypt, cosmopolitanism is not about physical mobility, patterns of global travel and first-hand experience of the world (Hannerz, 1996). Furthermore, the centrality of the nation to their daily experiences means cosmopolitanism in Cairo is not about a rootless form of identification with a 'universal' common humanity, attributed mainly to Kant and his Enlightenment ideologies (Kant, 2010;Nussbaum, 1994). Even within Middle Eastern scholarship, the concept of cosmopolitanism has been deeply impoverished and underdeveloped (Hanley,2008) and has often become attributed solely to elite circles able to sustain exclusively Westernized lifestyles and secular forms of practice such as alcohol consumption (e.g. Zubaida, 2002).
In contrast to such predetermined categories that approach cosmopolitanism as a static or fixed criterion (Hanley, 2008), my understanding of cosmopolitanism arises out of sustained empirical work and ethnographic research. I argue that cosmopolitanism in Egypt is exercised through internal heterogeneity (Elsayed, 2010) where these young women embedded within the specificities of daily life in Cairo internalise national, religious and transnational discourses in unique ways that lead to new avenues for self-understanding. Thus, by shifting the emphasis away from what cosmopolitanism should be to what cosmopolitanism actually means to these women, I approach cosmopolitanism as an actually-existing, practiced and lived identity that although physically rooted in place, becomes a multi-node space where both inward and outward facing cultural connections are dialectically interlinked. I draw particularly on Beck who argues that a cosmopolitanism existing in the real world is not an idealistic vision associated with a 'glittering moral authority ' (2004:135), but a deformed entity that organically takes shape in different forms in an everyday context. Indeed, in my engagement with young Egyptian women, I illustrate that even within the same national context cosmopolitanism takes on two different forms in relation to socio-economic differences.
Embarking from the above premise, it becomes possible to avoid positioning religious and cosmopolitan perspectives necessarily as dialogical counterpoints. Indeed, I argue how young Egyptian women's cosmopolitan imagination is not a way of abandoning or transcending local and religious ties, but in fact, as I illustrate below, these young women's religious identity is a fundamental springboard from where the negotiation of their cosmopolitan imagination commences, and a moral filter against which their understanding of the world is constantly measured. This is similar to Diouf's (2000) investigation of the biography of a rural Senegalese Muslim Brotherhood network which engaged creatively with a governing Western colonial order in ways that corresponded to, complemented, and ultimately benefited their Islamic identity. The case of young Egyptian women illuminates how the media have provided creative spaces for the revaluation and reflexive interpretation of local identities and particular experiences, thus giving them access to alternative routes through which they can be at once modern Muslims and pious cosmopolitans. For many, these may seem like contradictory pairings, but in a situation where the Koran and television set represent these women's two most important sources of information about the world, a divine cosmopolitan imagination could not be more natural.
---
Television Consumption amongst Women in Cairo
This research is based on the responses of 32 Egyptian Muslim women between the ages of 18 to 25, equally split between the lower middle and working class. Extensive changes that have befallen Egypt's social, political and economic fabric over the last five decades have rendered the idea of a single homogenous middle-class stratum increasingly redundant (Abdel Mo'ti, 2006;De Koning, 2009;Amin, 2000). What was once a sizeable, relatively coherent urban middle class (Abaza, 2006;Ibrahim, 1982) formed under Nasser's 1960s communist government, soon began to divide after the introduction of liberal economic policies by Sadat in the 1970s. A small wealthy, privately educated upper middle class able to sustain standards of cultural and economic capital associated with free markets and a global modernity came into existence alongside a majority lower-middle class who remained acquainted with more humble lifestyles affiliated with localized forms of belonging such as a public education and/or government sector jobs (Abdel Mo'ti, 2006;De Koning, 2009;Amin, 2000).
Though my original research from which this paper is drawn involves comparisons between the upper middle, lower middle and working class, due to space restrictions of this paper, the current discussion focuses on the latter two. Importantly, I draw on Ibrahim's (1982) model and approach class in Cairo as a complex socio-economic category defined through a range of interrelated indicators including income, education, occupation and lifestyle. Conscious of the fact that accurately defining social class categories in a complex society such as Egypt is a mammoth task (Abu-Ismail and Sarangi, 2013;Beshay, 2014), education became a particularly vital indicator in my study and point of entry into the divergent lives of my two groups of women. Research has shown that type (public/private) as well as level (intermediate/higher) of education are effective measures of social class differences in Egypt as they underpin the distinct and segregated classed worlds of young Egyptians (Gamal El Din, 1995;Haeri, 1997). Subsequently, I approached the working class through a further education college which offers intermediate diplomas in computing, and the lower middle class through the Faculty of Education at one of Cairo's public universities.
Respondents were split into four focus groups: two in the lower middle class and two in the working class, allowing me to cross-check the validity of each group session. The dynamic and interactive nature of the group discussions allowed me to become instantly aware of the centrality of the media to the daily lives of my young participants. Indeed, all the women I questioned admitted to having at least one television set in their household, while they all owned mobile phones and usually had indirect access to the internet through their friendship networks and frequent presence in internet cafes. In most cases, it was asserted -across both classes -that at least three hours of their daily time is dedicated to television consumption. As a result of their limited financial means and thus inability to entertain themselves outside the home, and in the case of the working-class -limited cultural capital acquired through basic education -it was clear to see how they depended greatly on television as an important vehicle of information, education and entertainment.
Both groups of women had access to satellite broadcasting in their households, and thus terrestrial television was usually shunned in favour of regional Arabic channels. The MBC package 2 , owned by prominent Saudi business tycoons, was especially well received. MBC2 -a 24-hour movie channel broadcasting contemporary Hollywood movies -and MBC4, dedicated entirely to the latest American serials and light entertainment programmes, were by far the most popular channels.
Unsurprisingly, therefore, American movies and serials emerged as the two genres preferred the most across both classes, although Egyptian drama was also significantly popular. It was very interesting to learn how both groups of women stressed that they preferred to watch foreign movies on a regional broadcaster such as MBC rather than directly from a foreign source. This could be driven by the obvious fact that these women had little access to foreign channels in their households. Western (mainly American) channels in Egypt are predominantly available on exclusive satellite packages that carry a monthly subscription fee and, as such, are mainly accessible only to the wealthier upper classes. In contrast, most of my participants stated that they received the Eutelsat and Nilesat free-to-air satellites, which predominantly broadcast regional Arabic channels from across the Middle East at no ongoing monthly cost. Some of the women did mention that, occasionally, if a signal was strong enough, they were able to receive a limited number of European channels. However, the usual presence of one television set in the home, located in a communal area such as the living room meant that their viewing practices were often monitored by parents and older (male) siblings and thus subject to censorship procedures that usually involved European channels being encrypted. One participant mentioned that her father referred to foreign channels as the 'devil' that annulled ones prayers and thus he felt forced to ban such channels to ensure that more vulnerable members of the household -particularly women and children -are protected from the 'corrupting influences' of uncensored Western material.
Other than the practicalities of access (or lack thereof) to foreign channels and the restrictions of monthly subscriptions, another two reasons drive these women's preference to follow Western programmes on regional Arabic broadcasters: firstly, translation services provided by MBC (either through subtitles or dubbing) mean these women can overcome their limited English-language abilities and enjoy movies in their native Arabic language. Secondly, nearly all women across both groups appreciated the censorship policies observed by MBC, and thus they felt much more comfortable watching these movies with the prior knowledge that any obscene language or overt sexual scenes would be removed. Indeed, being primarily owned by Saudi investors, and thus associated with 'one of the region's most tightly-controlled media environments' (BBC News Middle East, 2013) channels such as the MBC package are subject to strict self-censorship policies that avoid criticising the government or contradicting Saudi Arabia's ultraconservative Islamic Wahabi doctrine.
Another noteworthy point is that foreign genres such as American movies were especially central in conversations related to these women's religious identity. Both groups claimed that Islamic TV channels -which were very abundant and popular at the time -formed an important part of their television consumption practices. Nevertheless, what was interesting, was that their interaction with these religious channels was characterised by more targeted viewing practices e.g. they would view them to follow a particular theological discussion or to listen to a fatwa on a particular issue, such as gender segregation. However, more generally, to be exposed to different cultures, and to explore how to be confident Muslims in touch with the rapid changes of contemporary times, Western drama genres represented more comprehensive and broadly informative windows onto the outside world. Thus, in essence -and as will be discussed further below -these young women are using secular, foreign media formats to partly negotiate what are very religious and locally rooted identities. This is a strong indicator of how these women's divine cosmopolitan imagination is comprised of multiple identity layers that shift continuously and very smoothly between mediated and non-mediated spheres of influence, allowing them to remain loyal to and observant of the moral boundaries of their faith, yet while reaping the benefits of an open and accessible transnational media network. I will expand upon these points in the remainder of this paper where I embark on a more focused class-specific discussion about the unique ways both groups of women merge religions and cosmopolitan perspectives.
---
Lower Middle Class Women
In my conversations with lower middle class women, their religious identity was accorded a position of central importance and was a primary factor in defining their sense of self. It was particularly interesting to hear how religion for these women was almost synonymous with, or interchangeable with, their sense of national identity; I was often told that patriotic sentiments cannot be divorced from the strength of one's devotion to their religion or connection to God. As 23-year-old Hadeel informed me:
To be a truly patriotic Egyptian, you firstly have to be a good Muslim who is well aware of their religion and its main ethos. Islam teaches people to live together despite their socio-economic or even religious differences, to respect their leader, to protect their nation against an enemy or intruder, and thus to be a loyal and respectful citizen.
Such a strong religious assertion, which spanned the lower middle class, was very often in dialogue with a global articulation of culture. Indeed, many of the lower-middle-class women I engaged with talked about Islam as being a religion that is by default cosmopolitan, its main ethos strongly predisposed towards cross-cultural integration. According to Rowayda, 18, although Islam originated in the Arabian Peninsula, it obliges its followers to integrate with others of diverse backgrounds in order for its 'message of peace' to spread across the globe. Verses from the Koran were routinely quoted in proof of this, such as 'We have made you into nations and tribes that you may know one another' or 'Travel through the earth and see how Allah originated creation'. Despite this, the limited financial capabilities facing many of these women mean the majority of them see little hope of travelling beyond Egypt. In this context, therefore, their ability to see the world and experience different cultures via television, in the comfort of their own home, is vital. As 22-year-old Nadine told me:
The world belongs to God; it is all His land and He has ordered us to travel, to integrate, to mingle and to explore. I believe that every Muslim should travel widely if they are able to do so, as seeing first-hand the wonders of the world, the rich diversity characterizing different peoples and the beauty of this earth will strengthen one's faith and love for God, who is ultimately the creator of all these miracles. In our modern times there is no excuse, as television means we do not need to exert time, effort or money to go out to the world; the world comes to us as we sit comfortably in our chairs.
This quote is indicative of a divine cosmopolitan imagination that, although it remains firmly grounded through an obligation to observe and fulfill very specific religious duties, is simultaneously driven by a reflexive desire to refashion such duties as part of a worldly and outward-looking perspective craving knowledge of, and participation with, the global other. Ironically, in what they experience as an organic fusion of the divine and the secular, a modern and mainly Western-inspired technological medium such as television - very often shunned by older Islamic scholars as being 'sinful' - has become these young women's primary means of integrating with other cultures, thus fulfilling what they consider to be a deep-rooted religious obligation. Importantly, television is not simply a means for these women to observe a faraway world; I quickly learned how the knowledge they glean from these mediated encounters becomes an intimate part of their self-assessment and the way they perceive, assess and make sense of their religion. The majority of these women were very keen to challenge the general misconception - usually amongst Muslims themselves - that the purpose of their religion is reduced to fulfilling specific duties such as praying or fasting. For them, Islam is a more wholesome religion that extends far beyond the mosque or prayer mat. Being a Muslim is about being a productive member of society, having a strong work ethic, and treating those around you with respect. According to my female informants, such a holistic understanding of religion, one that encourages a person to be a better human being, is where most Muslims fail, and where there is a vast need to learn from the experiences of other more developed cultures. As a result, the Western world - accessed predominantly through television - was considered to be a rich cultural fountain, and represented an important reference and point of comparison against which local and religious particularities were being routinely measured. In particular, the West's commitment to basic cosmopolitan and humanitarian values such as individuality, democracy and women's rights is something that they respect deeply and wish for in Egypt. As 23-year-old Asmaa told me:
The sad reality today is that it is the "unreligious" Western countries which respect and uphold basic human morals and values, while the Muslim world is a shame to us all. We have a lot we can learn from them (Western countries) and therefore, as long as you have the right intention, the media represent important tools that generations before us never had, allowing us to engage directly and learn from the model of these more developed cultures, thus always pushing ourselves to become better Muslims. This was illustrated in a long discussion I once had with a group of these informants about the unsatisfactory way a raped woman is dealt with in Egypt. When I probed them in order to discover what had provoked such intense and critical opinions about a matter considered to be taboo in Egypt, I discovered, to my surprise, that it was an episode of the American teen drama 90210 broadcast on the regional MBC4 channel (discussed above) and accompanied by written Arabic translation. In the few weeks prior to our discussion, there was a key storyline where a lead female character was allegedly raped by a school teacher, and this created much interest amongst my female informants. Twenty-two-year-old Amany was particularly impressed at how the rape victim within the dramatic scenario was treated respectfully and sensitively by those around her, while in Egypt, she believes:
The girl would have been told to stay quiet so as not to lose her own and her family's reputation. The sad thing is, even though we are a Muslim country, our response is very un-Islamic in its disrespect for the victim. However, by having insight into how other cultures deal with such a situation, we might one day learn to adopt their humility and dignity.
In light of the above, although television only provides women such as Amany with a selective representation of Western culture - usually fictional - it is still a powerful tool allowing them to confidently engage in reflexive cross-national comparisons. Differences are often pointed out between their own tangible and everyday experiences of corruption and dishonesty in Egypt, and scenes in a film or serial, which they perceive to point to the transparency and integrity of Western culture. This point was confirmed by 21-year-old Seham, who said that regular exposure to such media often makes her feel 'disappointed and upset' as it discloses the true extent of the dire reality of life in Egypt and the situation of Muslims. Nevertheless, she is willing to endure such temporary feelings in order to reap the 'long term benefits' of the media. In her words: 'Without the media we would be closed up on ourselves with no insight into alternative ways of life or what it could mean to be better people and better Muslims.
If this was the case, would we ever have anything to strive towards'? What we can observe so far is a situation where these young women are undergoing a dynamic and imaginative engagement with a mediated Western culture as an attempt to negotiate for themselves a position as worldly, humanitarian and culturally sophisticated Muslims. Importantly, articulating their understandings of the world primarily through the lens of religion means that although lower middle class women accept the West as an important fountain of cultural advancement in many aspects, they simultaneously acknowledge it to be a potential source of immorality and religious laxness. What they have learned of Western culture through the media often confirms to them the 'spiritual ignorance' of Westerners, which has resulted in what they consider to be their excessive materialism, objectification of women and sexual promiscuity. According to 18-year-old Nesma, the West may be wealthy and scientifically advanced, but it remains 'spiritually poor' and thus a potential danger of transnational media is that Egyptian youth may learn to be 'hedonistic' like Westerners, 'becoming slaves to money and consumer objects rather than a higher divine order'. As a result, Nesma concludes that 'one must equip themselves with strong faith to ensure that they are well aware of their moral boundaries and a sense of what external values are acceptable or not to adopt.' It seems, therefore, that by including the West in a backwardness which involves a disregard for religiosity, these women confidently reverse common perceptions of religious people as being ignorant, stagnant and unprogressive (Elsayed, 2010). Thus, what these female informants display is a hybrid form of cosmopolitanism that blends a fascination with the West with a critical attitude. Although many of these women believe that what they are able to learn about the outside world through the media can help them to be more productive, worldly and sophisticated Muslims, they also consider themselves in a superior position to teach the Western world a vital lesson: the significance of faith and piety. Hence, for these young women, the world does not involve a set of one-way connections from the West to the rest, but is a more complex shared space we all mutually make and influence (Gable, 2010). According to Heba, in a world of open and instant communication, Egyptians and Muslims need to avoid always being passive receivers of what other people choose to send, and instead, should 'strive to become active instigators and senders of their own media messages as this is the best way to educate the world about the beauty and mercy of our religion'. The internet especially was considered to be important in this respect as it enables them to create their own messages - through blogs, websites or tweets - that can then be broadcast uncensored to millions of other users across the globe. Heba discussed how she volunteers for an English-language website called Islam Online, which aims to promote a modern, youthful and moderate image of Islam. In the context of the above discussion, therefore, while religion acts as a filter for how these women's cosmopolitan orientations take shape and defines the contours of their moral boundaries, their religious identity in turn becomes more fluid, adapting and changing in relation to their exposure to the wider world.
The end result is a divine cosmopolitanism that is not static, but dynamic and constantly evolving as grounded religious and mediated secular spheres of influence remain in close dialogue and interaction.
---
Working Class Women
My discussions with working-class women revealed a discourse heavily reflective of a strong religious identification that placed great emphasis on the centrality of Islam to their daily lives.
Beyond a verbal assertion of religious devoutness, however, I felt they were not comfortable with me probing too much into the details of Islamic discourse or teachings. Being a Muslim myself, I was able to comfortably talk to both groups about religious matters, and I discovered that unlike the lower middle class, the working class' knowledge of fundamental Islamic teachings was often underdeveloped. Obviously, one's religious knowledge is associated with cultural capital and education. In a context where the majority of these young women have basic literacy and education skills, it should be no surprise that their familiarity with religious texts and their personal knowledge of Islamic discourse is often poor. Thus, it quickly became clear how their relationship to religion is based primarily on teachings and traditions passed down from their parents. In contrast, for the lower middle class, their educational capital has allowed them to comfortably engage with religious texts, so that through their own efforts of increasing religious understanding and perception, they are able to make more informed and reflexive decisions regarding religious practice. Perhaps here it is fitting to use Deeb's (2006) distinction between an 'authenticated Islam' (2006: 21) that persons may experience based on piety and personal understanding, and an unreflexive relationship to Islam underpinned by a conformity to religious folklore and heritage passed down through generations. This premise is succinctly captured by Mahmood's (2005) female interlocutors, who formed part of the Egyptian women's mosque movement she was studying in the 1990s. According to these women, a 'popular religiosity' (Mahmood, 2005: 45) which has become rife amongst ordinary Egyptians has reduced Islamic knowledge to a 'system of abstract values' (ibid.) that functions mainly as a public marker of a socially-desirable 'religio-cultural identity' (ibid.: 48) rather than a true and honest realization of 'piety in the entirety of one's life' (ibid.: 48). Importantly, I do not aspire to make any judgments regarding which class is more religious or whose faith is more powerful. This is neither my place, nor does it fall within my research aims. What I am trying to say, however, is that while Islam is undeniably central to the lives of both groups, they have developed very different understandings of how religious discourse informs and shapes different aspects of their everyday lives. For the working class, I observed a strong need to abide by familial expectations and hegemonic social structures that impose Islamic discourse as a strict set of divine values defining the limits of acceptable conduct and physical appearance. In this context, submission to Islamic principles becomes an overbearing moral framework for ensuring inclusion and social conformity and for upholding what their immediate society dictates is a 'respectable' and 'honourable' reputation for women. This is particularly illustrated in the way the veil takes centre stage within working-class locales as a highly visible and public expression of these women's 'embodied piety' and 'well preserved' honour.
As my rapport with these women increased, they often discussed that although their faith is a vital part of their self-identity and the ways they made sense of the world, they still felt fervently bitter at the way their parents very often imposed aspects of religion upon them in a very didactic way without making any effort to actually teach them the fundamental principles of Islam. In this context, I was often told how they felt the media played a central role in allowing them to undergo a process of self-exploration regarding what it means to be a young Muslim in the modern world. As 22-year-old Kariman told me: Women like me are led like sheep - you don't really have much control over your life. Every aspect of your existence is under the spotlight if you're a woman in Egypt - what you wear, how you walk, how you talk to men. The more religious you "appear" to be, the better your reputation will be and thus your marriage opportunities. However, our parents make little effort beyond this to actually educate us about our religion or its main ethos. As I struggle to read a religious book, television makes it much easier for me to increase my knowledge and awareness, especially when there are so many available channels. This quote highlights the significance of television as a cheap and readily accessible medium allowing Egyptians like these young informants, especially those with basic literacy skills, to depend on it for education, information and entertainment. This was confirmed by another participant, Zeinab, who mentioned how she too sees television as a highly informative tool for education that helps broaden her horizons and knowledge about both worldly and religious matters. Zeinab discusses how she turns to religious channels in order to listen to fatwas or the opinions of prominent Islamic scholars on specific issues of importance to her such as praying or giving charity to the poor. However, simultaneously, a large part of Zeinab's viewing practice is also dedicated to regularly watching American movies and sitcoms. Zeinab focused particularly on the fact that although she wears her 'veil with pride', she also wants to be a 'smart, modern and fashionable Muslim woman,' and thus enjoys Western entertainment, particularly movies, as a means to remain in touch with the latest global fashion developments. In her own words: 'I watch and observe and then only take what suits me and complements my identity as a Muslim. My parents believe I'm too heavily influenced by what I see in the media, but I know my boundaries very well'.
Zeinab's underlying assertion that the different values these women are exposed to in the media often create a tension with the existing ideals of the older generation appeared to be a very common sentiment. Adding to this conversation, Mariam argues how women in Egypt only have to read the newspaper or switch on their television to be exposed to stories of women in the Western world taking up important social and political roles as prime ministers, judges and scientists. 'Meanwhile,' she continued, 'our parents ban us from even talking to men!' (Elsayed, in press: 6). This demonstrates that, for working-class women, a feeling that their conduct is highly controlled by rigid parental expectations can be strengthened through their exposure to the media and an ability to witness alternative representations of gender roles. For Mariam, there is no contradiction between being a pious and devout woman, as her religion dictates, and being a 'modern', career-focused and fashion-conscious woman, as is often the norm in the Western societies she observes on screen. The issue, according to Mariam and many of the other women, harks back to parents' narrow and parochial interpretation of religion.
The previous section highlighted how the lower middle class have developed a very reflexive, rational and almost intellectual fusion between religious obligations and a cosmopolitan outlook. As we have seen, the working class experience more of a generational struggle to negotiate for themselves a third space within which they are able to conform to the essential teachings of their religion, while also challenging parental expectations through adapting and internalizing cosmopolitan principles they are exposed to in the media. Interestingly, many women in the working class discussed how television - particularly Western programmes - became the source of numerous clashes between them and their parents. As touched on above, this often resulted in foreign channels being encrypted, television viewing being censored by family members, or, in a few cases, even TV being banned in the home. This generational chasm was confirmed very aptly in a discussion I once had with these women about pre-marital relationships. According to one of the participants, while she sees her parents as occupying a very sheltered, static, and inward-looking existence, she affiliates herself to a 'new', more culturally-mobile generation who, although they remain pious, are exposed to the outside world through the media and are thus far more versed in contemporary ways of life. Consequently, young women in Egypt have come to formulate very different needs to their parents, particularly demanding love and romance as pre-conditions to marriage. For an older generation, however, who continue to regard wedlock as the only legitimate and permitted form of contact between a man and a woman, dating becomes an immoral 'Western' concept their daughters internalize through an unregulated exposure to media which are at odds with the essential values of their faith and society.
I have argued elsewhere (Elsayed, in press) how transnational media become important catalysts fuelling a generationally-specific 'subcultural imagination' driving these young people to question and subvert hegemonic ideologies at the local level. In acquainting them with the possibility of alternative realities and ways of being: the media allow young Egyptians to develop a reflexive awareness of different sets of moralities informing social roles. Thus, in their encounter with a mediated outside world, young Egyptians' sense of morality and self-righteousness of dominant codes of practice in the nation come to be discussed, addressed and, as we will see, physically challenged (Elsayed, in press: 4).
In essence, for the older generation - represented by parents, societal norms and traditional Islamic scholars - a mediated globalisation often becomes an uncontrollable culprit, synonymous with excessive Westernization, and thus primarily to blame for what they consider youths' lack of attachment to their religion. For these young women living in an age of intense mediation and global connectivity, the boundaries between the 'religious' and 'unreligious' are much more fluid and interchangeable, and thus television becomes a vital tool for exploring, defining and negotiating their identity as young Muslims in the 21st century. From an adult or outsider perspective it may appear that youth are caught between multiple contradicting cultural or religious repertoires (Nilan and Feixa, 2006). For a generation of technologically-competent and media-savvy youth, however, the media are naturally embedded within their processes of self-understanding and part of a daily struggle to grasp and make sense of a highly complex, interconnected and rapidly changing world.
---
Conclusion
This paper has explored some of the many and complex ways youth identities - in a Global South context - are being articulated within a world of increased cultural interdependence and highly mediated cross-national connections. As documented by the case of Egyptian women, the mediation of everyday life expands the horizons of their cultural repertoires beyond national space and makes distant systems of meaning relevant to their lives and to their religious and national identities. In a situation where local affiliations and the particularities of geographical space remain central to these young women's identities, I have demonstrated how religious beliefs and national sentiment are not antithetical to cosmopolitanism. Instead, informed primarily by transnational television, these young women articulate a divine cosmopolitan imagination through which they form multiple allegiances to God, the nation and global culture simultaneously. The multi-layered nature of these young women's identities is captured in the way they display an intricate set of preferences towards the diverse media they have access to. As we have seen, such preferences do not fall neatly within a linear cultural proximity framework (Straubhaar, 1991), which assumes an automatic preference for local and national media.
In a more recent reworking of this theory, Straubhaar and La Pastina (2005) maintain that media preferences must be recognized as more complex, taking place at multiple levels that conform to the different religious, cultural and political aspects that shape people's multilayered identities. As discussed in this paper, young Egyptian women's media preferences centre around the content and values represented by particular genres and programmes, rather than being reduced to the cultural origins of the media. We have seen this in the way secular Western media formats such as American movies become central to how these women negotiate their relationship to religion. In this context, in the articulation of both their cosmopolitan imagination and religious identities, young Egyptians have become skilled negotiators, moving within and between mediated and non-mediated discourses. They move physically within a grounded place that sets the moral boundaries for bodily existence, yet shift subjectively between disembedded spaces of mediated representation, often providing new contexts for meaning and inclusivity. In light of this dialectical interplay between proximity and distance, television, in exposing young Egyptians to representations of different cultural worlds, often provides a sense of detachment from the immediate, although not as a way of transcending the local or the religious, but in providing a new lens and context for imagining and reimagining proximate social experiences. The result, for young Egyptian women, is a divine cosmopolitan imagination.
---
ENDNOTES
| 48,534 | 1,093 |
3dde8d69983cf57901fc423cf84c3c7c2e1243d8 | Socio-demographic and lifestyle factors associated with hypertension in Nigeria: results from a country-wide survey | 2,022 | [
"JournalArticle",
"Review"
] | With the rising prevalence of hypertension, especially in Africa, understanding the dynamics of socio-demographic and lifestyle factors is key in managing hypertension. To address existing gaps in evidence of these factors, this study was carried out. A cross-sectional survey using a modified WHO STEPS questionnaire was conducted among 3782 adult Nigerians selected from an urban and a rural community in one state in each of the six Nigerian regions. Among participants, 56.3% were women, 65.8% were married, 52.5% resided in rural areas, and 33.9% had tertiary education. Mean ages (SD) were 53.1 ± 13.6 years and 39.2 ± 15.0 years among hypertensive persons and their normotensive counterparts respectively. On lifestyle, 30.7% had low physical activity, 4.1% consumed tobacco currently, and 35.4% consumed alcohol currently. Being married (OR = 1.88, 95% CI: 1.41-2.50) or widowed (OR = 1.57, 95% CI: 1.05-2.36) was significantly associated with hypertension, compared with never-married status. Compared with no formal education, primary ( | INTRODUCTION
There is a high global burden of hypertension with an estimated 1.13 billion people worldwide reported to have hypertension, with most (two-thirds) living in low-and middle-income countries (LMICs) [1]. While in 1990, high systolic blood pressure (BP) was the seventh-leading risk factor by attributable disability-adjusted life-years (DALYs), in 2019, it had become the leading risk factor [2]. The African Region of the World Health Organization (WHO) has the highest prevalence of hypertension (27%) [1]. The increase in LMICs is due mainly to a rise in hypertension risk factors in their populations [1].
Several studies have reported the increasing prevalence of hypertension in Africa [3,4]. Nigeria, as the most populous country in Africa, is also a major contributor to the increasing burden of hypertension in the continent. Between 1995 and 2020, the estimated age-adjusted prevalence of hypertension increased from 8.5% to 32.5% [5]. A recent study also found a similar prevalence of 38% from a nationwide survey in Nigeria [6].
Current evidence shows that gaps in hypertension management are attributable to socio-demographic determinants [7][8][9] and lifestyle factors [10,11]. An earlier study suggested that demographic and lifestyle variables determined racial differences in hypertension prevalence [12]. Nigeria has a rapidly growing population with increasing urbanization and numerous ethnic groups across the country's different regions. However, in Nigeria, the relationship between socio-demographic/lifestyle factors and hypertension is understudied.
To address the existing gaps in evidence, this study was carried out as part of the Removing the Mask on Hypertension (REMAH) study, a nationwide survey of hypertension aimed at defining the true burden of hypertension in Nigeria. Previously published articles from the REMAH study focused on the study design [13], prevalence of hypertension [6], and prevalence of dyslipidemia [14]. This study intended to assess the socio-demographic and lifestyle factors associated with hypertension in a black population. The findings from this study may be useful for planning interventions and policies to prevent and control hypertension in Nigeria and other similar settings.
---
METHODS
---
Study design
Data were derived from a subset of the REMAH study, a cross-sectional national survey on hypertension. The details of the study design have been reported in a previous study [13]. The study population comprised adults 18 years and older who lived in selected communities. A multi-stage sampling technique was used to select participants from 12 communities across six states of Nigeria. In the first stage, one state was selected from each of the six regions of the country. In the second stage, with the aid of the administrative data of the 2015 general elections of the Independent National Electoral Commission, we selected two local government areas (LGAs) in each state, consisting of urban and rural communities. For urban communities, we selected LGAs in state capitals including Abuja Municipal Area Council for Abuja (North-central), Gombe Municipal for Gombe (North-east), Gusau for Zamfara (North-west), Onitsha for Anambra (Southeast), Uyo for Akwa-Ibom (South-south), and Ibadan-North for Oyo (Southwest). Gwagwalada, Akko, Bungudu, Oyi, Nsit Ubium, and Akinyele LGAs were randomly selected for sampling the rural communities in these states. In the third and fourth stages, one ward was randomly selected from each of the rural and urban LGAs, from which one polling unit was then randomly selected. Fieldwork was carried out between March 2017 and February 2018. Out of 4665 adults invited, 4197 consented to participate in the REMAH study; however, only 3782 of them had the required data on socio-demography and lifestyle used for this study. We complied with the Helsinki guidelines for conducting research on human participants, and the study was duly approved by the University of Abuja Teaching Hospital Human Research Ethical Committee.
---
Data collection
Socio-demographic characteristics. Data on various socio-demographic characteristics were collected using an investigator-administered questionnaire. Marital status was grouped into married, unmarried, divorced/separated, and widowed. The area of residence was either urban or rural. Work status was categorized into government-employed, non-government-employed, self-employed, non-paid, and unemployed. Educational status was classified into no formal education, primary, secondary, and tertiary education.
Lifestyle measures. Trained fieldworkers administered a modified WHO STEPS questionnaire to obtain information on respondents' sociodemographic characteristics, physical activity, tobacco use, and alcohol consumption [15]. Physical activity was assessed using the International Physical Activity Questionnaire, which enquired about physical activity during work and leisure. Weekly physical activity was computed by multiplying time spent (in minutes) on a given activity in the reported week by intensity in metabolic equivalents (in MET units) corresponding to that activity: 8 METs for vigorous work or recreational activities; 4 METs for moderate work or recreational activities; and 3 METs for walking activities [16]. The total weekly activity was obtained by totaling the weekly physical activity (expressed in MET-minutes/week) of the three kinds of activities. According to the global recommendation of the WHO on physical activity, respondents had high physical activity if total weekly activity was ≥600 MET-minutes or low physical activity if <600 MET-minutes. Tobacco use was defined as current tobacco use in any form, including smoking, snuffing, and ingestion. Alcohol consumption was defined as current consumption of alcohol in any form and quantity.
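The scoring rule above is essentially a small piece of arithmetic followed by a threshold. Purely as an illustration, the following Python sketch applies it to one respondent's reported minutes; the column names are invented for the example and are not the actual STEPS variable names.

```python
import pandas as pd

MET_VIGOROUS, MET_MODERATE, MET_WALKING = 8, 4, 3

def weekly_met_minutes(row: pd.Series) -> float:
    """Total weekly activity in MET-minutes across the three kinds of activity."""
    vigorous = MET_VIGOROUS * (row["vig_work_min"] + row["vig_rec_min"])
    moderate = MET_MODERATE * (row["mod_work_min"] + row["mod_rec_min"])
    walking = MET_WALKING * row["walk_min"]
    return vigorous + moderate + walking

def activity_level(total_met_minutes: float) -> str:
    """WHO cut-off: >= 600 MET-minutes/week counts as high physical activity."""
    return "high" if total_met_minutes >= 600 else "low"

# Example respondent: 60 min vigorous work, 120 min moderate work, 90 min walking per week.
respondent = pd.Series({"vig_work_min": 60, "vig_rec_min": 0,
                        "mod_work_min": 120, "mod_rec_min": 0, "walk_min": 90})
total = weekly_met_minutes(respondent)   # 8*60 + 4*120 + 3*90 = 1230 MET-minutes
print(total, activity_level(total))      # 1230 high
```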
Blood pressure measurement. Blood pressure was measured by auscultation of the Korotkoff sounds at the non-dominant arm using a mercury sphygmomanometer, as previously described [6]. Participants rested in a seated position for at least five minutes, and observers obtained five consecutive BP readings at 30-60 s intervals. Systolic (phase I) and phase V diastolic BPs were measured to the nearest 2 mmHg. Standard cuffs with a 12 × 24 cm inflatable portion were used. In instances where the upper arm circumference exceeded 31 cm, larger cuffs with 15 × 35 cm bladder were used. A participant's BP was the average of the five consecutive BP measurements. Quality control measures were applied to ensure good quality measurement of BP by training observers to avoid odd readings, consecutive identical readings and zero end-digit preference. At intervals, these parameters were examined and when significant deviations were observed, observers were retrained.
Hypertension was defined according to the 2013 guidelines of the European Society of Hypertension/European Society of Cardiology as systolic BP ≥ 140 mmHg or diastolic BP ≥ 90 mmHg or self-report treatment of hypertension using antihypertensive medications [17].
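Putting the two preceding steps together, the classification rule reduces to averaging the five readings and applying the 140/90 mmHg cut-offs or the treatment criterion. A minimal, hypothetical Python sketch of that rule (the survey itself was not analysed this way) could be:

```python
from statistics import mean

def classify_hypertension(sbp_readings, dbp_readings, on_antihypertensives):
    """Average the five consecutive readings and apply the 140/90 mmHg rule.

    Hypertensive if mean SBP >= 140 mmHg, mean DBP >= 90 mmHg, or the
    participant self-reports antihypertensive treatment.
    """
    return (mean(sbp_readings) >= 140
            or mean(dbp_readings) >= 90
            or bool(on_antihypertensives))

# Borderline readings but on treatment -> classified as hypertensive.
print(classify_hypertension([132, 134, 130, 136, 128], [84, 86, 82, 88, 80], True))  # True
```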
Data management and statistical analysis. Data were managed and analyzed using SAS software version 9.4 (SAS Institute, Cary, NC). We employed the Kolmogorov-Smirnov test to ascertain the normality of continuous variables. We used mean and standard deviation as measures of central tendency and dispersion for normally distributed continuous variables. We further analyzed differences between the means of independent binary groups using the t-test. Proportions were used to express all categorical variables and the differences between independent groups were analyzed using the chi-square test. We used logistic regression models to assess the relation of various socio-demographic and lifestyle factors with hypertension. Statistical significance was set at p < 0.05.
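The analysis itself was run in SAS 9.4; purely as an illustration of the kind of age- and sex-adjusted logistic model described (and of how odds ratios with 95% confidence intervals are obtained from it), a rough Python/statsmodels sketch on simulated data might look as follows. All column names and the simulated data are assumptions for the example, not the study dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "hypertension": rng.binomial(1, 0.3, 500),
    "age": rng.integers(18, 90, 500),
    "sex": rng.choice(["male", "female"], 500),
    "marital_status": rng.choice(["never_married", "married", "widowed", "divorced"], 500),
})

# Age- and sex-adjusted logistic model with 'never married' as the reference level.
model = smf.logit(
    "hypertension ~ C(marital_status, Treatment('never_married')) + age + C(sex)",
    data=df,
).fit(disp=False)

# Exponentiated coefficients give odds ratios with their 95% confidence intervals.
summary = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
summary.columns = ["OR", "2.5%", "97.5%"]
print(summary)
```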
---
RESULTS
---
Characteristics of study participants
Table 1 summarizes the characteristics of the study participants. Of 3782 participants, 1654 (43.7%) were men and 2128 (56.3%) were women. The majority (2483, 65.8%) of the participants were married, 1985 (52.5%) resided in rural areas, and 1280 (33.9%) had tertiary education. Hypertensive patients were older than their normotensive counterparts. On lifestyle, 1160 (30.7%) of the participants had low physical activity, 156 (4.1%) consumed tobacco, while 1340 (35.4%) consumed alcohol. Only 3.2% of the study participants consumed both alcohol and tobacco, 8.1% were physically inactive and consumed alcohol, and 1.0% were physically inactive and consumed tobacco.
---
Association between socio-demographic variables and hypertension
Figure 1 shows the increasing positive association between different age groups and hypertension in women and men. Table 2 shows the association of other socio-demographic variables with hypertension. After adjusting for age and sex, in comparison to unmarried status, being married (OR = 1.88, 95% CI: 1.41-2.50) or widowed (OR = 1.57, 95% CI: 1.05-2.36) was positively associated with hypertension. After stratifying by sex, being married remained significantly associated with hypertension in women (OR = 1.80, 95% CI: 1.19-2.74) and men (OR = 2.14, 95% CI: 1.43-3.21) (Fig. 2). Unemployment/non-paid work was positively associated with hypertension (OR = 1.42, 95% CI: 1.07-1.88), while living in an urban area was not significantly associated with hypertension (OR = 1.11, 95% CI: 0.96-1.28). Compared with no formal education, primary (OR = 1.44, 95% CI: 1.12-1.85), secondary (OR = 1.37, 95% CI: 1.04-1.81), and tertiary education (OR = 2.02, 95% CI: 1.57-2.60) were associated with hypertension.
---
Association between lifestyle variables and hypertension
Table 2 shows the association between lifestyle and hypertension. Low physical activity was associated with 23% higher odds of hypertension (OR = 1.23, 95% CI: 1.05-1.42). Also, alcohol consumption was associated with hypertension (OR = 1.18, 95% CI: 1.02-1.37).
---
DISCUSSION
The key findings of our study showed that some socio-demographic and lifestyle factors were associated with hypertension. As the age of participants increased, the association with hypertension strengthened. Being married, widowed, unemployed/non-paid, having higher education, low physical activity, and alcohol consumption were significantly associated with hypertension.
Over the years, there has been an increase in the burden of hypertension in Nigeria. A recent systematic review reported an increase from 8.2% in 1990 to 32.5% in 2020 [5]. A previous publication from the REMAH study found the prevalence of hypertension was 38% [6]. Findings from a meta-analysis in Africa showed an estimated prevalence of 57% in an older adult population ≥50 years, which may indicate the increasing burden of hypertension with increasing age [3], just as our study noted the increasing association of hypertension with increasing age.
In our study, we found that marital status was associated with the prevalence of hypertension. Being married and widowed increased the odds of having hypertension by 88% and 57% respectively in both men and women. In contrast, previous studies in Iran [18] and Poland [19] observed that married men have lower BP than their unmarried counterparts. The authors suggested that married men had better sleep, less stress, better moods and a healthier diet compared with unmarried men [18]. The study in Iran reported that married women have higher BP than unmarried women. It has been reported that married women get stressed from taking care of their families [20].
A recent study in Ghana also explored the association of marital status with hypertension within sub-Saharan Africa [21]. Its findings showed that marital status was an independent risk factor for hypertension in Ghana for women but not for men, after controlling for lifestyle and socio-demographic factors. Our study showed a reduced but significant association between marital status and hypertension for both women and men after adjusting for age. Possible explanations for this association among married men and women may be related to the social causation hypothesis. Within the Nigerian context, marriage is seen as an achievement that may influence one's socioeconomic conditions. With improved socio-economic status, there is a tendency towards purchasing foods away from home, which are more likely to be processed foods [22], with an increased risk of hypertension. Also, roles in the marriage could put more pressure on both women and men. In Nigeria, a married woman has to combine work with her domestic responsibilities of catering for her spouse and children [23]. A married man may have to take more responsibility to provide for the needs of his family [24]. All these may contribute to stress that can increase the risk of hypertension, thereby offsetting the potential emotional benefits of marriage.
Another socio-demographic factor we found to be associated with hypertension was education. Educational attainment is said to be a strong measurable indicator of socio-economic status and it is usually fixed after young adulthood [25]. Previous studies from developed countries have reported that lower education tends to increase the risk of having hypertension [26,27]. These studies found that higher education may promote better awareness of hypertension and healthier dietary and occupational choices. However, our study observed a higher association of tertiary education with hypertension. In the Nigerian context, attaining tertiary education may be linked with better occupational and economic opportunities and a tendency towards urban lifestyles such as sedentary living and eating unhealthy foods, as well as engaging in more work to pay bills.
Fig. 1 Odds ratio of hypertension by age group in men and women. The x-axis represents age group (in years) while the y-axis shows the odds ratio. The square symbol represents the odds ratio for men and the circle for women.
Physical inactivity is a growing concern as a risk factor for cardiovascular diseases including hypertension due to increasing urbanization and the tendency towards sedentary lifestyles. We found an association of low physical activity with hypertension in our study. Recent studies continue to emphasize the beneficial effects of physical activity in the prevention and control of hypertension [28][29][30]. The WHO suggests that policies to increase physical activity should aim to ensure that, among other measures, walking, cycling and other non-motorized forms of transport are accessible and safe for all [31]. In Nigeria, there is a plan for a national non-motorized transport policy with a focus on improving access for walking and cycling, as most Nigerian roads lack walkways, with pedestrians and cyclists sharing the roadway with motorized transport [32]. The current road architecture greatly discourages walking and cycling as forms of physical activity due to the dangers posed by motorized transport. Furthermore, our study reported an association of alcohol consumption with hypertension. It has been well established in the literature that alcohol consumption increases the risk of hypertension. A recent systematic review showed that reducing alcohol intake lowers BP in a dose-dependent pattern [33]. This emphasizes the importance of alcohol policies to reduce alcohol consumption. It has been reported recently that Nigeria has few alcohol-related policies, with weak multi-sectoral action and funding constraints for their implementation and enforcement [34]. These policies address the need to limit access to alcohol, although tax increases on alcohol and prohibition of alcohol advertisement were not addressed. With these policy gaps, there is a need for more attention on alcohol control by developing a comprehensive policy to regulate its harmful use.
Our findings may be generalised to other countries of sub-Saharan Africa, as most countries within the sub-region are undergoing demographic transition with implications for health. There is an ongoing population increase with an associated increase in the aging population, while a large young population remains [35]. Although there is rapid urbanization in most countries within the sub-region, physical infrastructure that encourages physical exercise is lacking in most cities. This, coupled with poor regulation of the consumption of alcohol and sugar-sweetened beverages, may help fuel the epidemic of hypertension in the region.
One prominent strength of our study is its large sample size, with participants recruited from the six regions of Nigeria. Hence, the findings of our study may be used to plan interventions or policies for the prevention and control of hypertension among similar populations. The results of this study should be interpreted within the context of its potential limitations. Our study was a cross-sectional study and hence the findings do not permit causal inference in relation to socio-demographic/lifestyle factors and the prevalence of hypertension. A repeat BP measurement at least two weeks later would have ensured a true diagnosis of hypertension according to the guideline. We, however, averaged five BP readings, which may closely approximate an individual's usual BP. Furthermore, we deployed standardized methodology to ensure good quality of BP measurement throughout the entire period of the survey so as to appropriately identify cases of hypertension. Digital devices may be considered in future studies to improve the quality of BP measurement. Also, some of the variables were assessed through participants' self-reporting and this might have biased the findings of this study. Variables such as physical activity, tobacco use and alcohol consumption were prone to self-reporting bias, even though we employed trained research assistants to interview participants. In addition, tobacco use and alcohol consumption were not quantified; quantifying them might have allowed dose-response associations with hypertension to be assessed in this study. Another key limitation in our study is the lack of data on participants' income, an important socio-demographic variable. The lack of data on income may have limited our findings on the association of socioeconomic status and hypertension, as we used education as an indicator of socio-economic status in our study.
Fig. 2 Odds ratio of hypertension by marital status in men and women (adjusted for age). The x-axis represents marital status while the y-axis shows the odds ratio. The unshaded bar represents the odds ratio for men and the shaded bar for women.
---
CONCLUSION
In conclusion, we have reported the socio-demographic and lifestyle factors associated with the prevalence of hypertension in Africa's most populous country. Marriage, education, low physical activity, and alcohol consumption were significantly associated with hypertension. These may be associated with more cases of hypertension presenting to health facilities, with a rising burden of the disease. Hence, there is a need for counselling, health education and policy formulation and implementation targeting these factors to prevent and control hypertension. Nurses and community health extension workers should be trained on counselling in line with the task-sharing policy. Also, the plan for a national non-motorized transport policy in Nigeria with a focus on improving access for walking and cycling should be expedited by both federal and state governments. On alcohol consumption, there is a need for more attention on alcohol control through the development of a comprehensive policy to regulate its harmful use and to improve multi-sectoral action and funding for enhanced policy implementation. Future research efforts could include the use of religious bodies to raise awareness of hypertension, as well as to serve as a medium for counselling and health education on hypertension. The focus should be on the factors identified in this study: marriage, education, physical activity, and alcohol consumption.
---
DATA AVAILABILITY
The dataset used in this study is available from the corresponding author on reasonable request.
---
Summary table
What is known about this topic
• Socio-demographic and lifestyle factors have been reported to be associated with hypertension in some studies in high-income countries
• Most Nigerian studies focused on the prevalence of hypertension at subnational levels or within small populations
What this study adds
• We identified a higher prevalence of hypertension among married people and those with higher educational status among adult Nigerians
• Low physical activity and alcohol consumption were also associated with hypertension among adult Nigerians
---
AUTHOR CONTRIBUTIONS
ASA was responsible for extracting and analysing data, interpreting results, drafting, revising and approving the final manuscript. BSC was responsible for extracting and analysing data, interpreting results, revising and approving the final manuscript. DN was responsible for interpreting results, revising and approving the final manuscript. JES was responsible for interpreting results, revising and approving the final manuscript. ANO was responsible for extracting and analysing data, interpreting results, revising and approving the final manuscript.
---
COMPETING INTERESTS
The authors declare no competing interests.
---
ETHICAL APPROVAL
The study was duly approved by the University of Abuja Teaching Hospital Human Research Ethical Committee.
---
ADDITIONAL INFORMATION
Correspondence and requests for materials should be addressed to Azuka S. Adeke.
Reprints and permission information is available at http://www.nature.com/reprints. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 21,799 | 1,073 |
f66689b142dc001563891a81d726f1622ce38a37 | Comorbidity patterns in cardiovascular diseases: the role of life-stage and socioeconomic status | 2,024 | [
"JournalArticle",
"Review"
] | Cardiovascular diseases stand as a prominent global cause of mortality, their intricate origins often entwined with comorbidities and multimorbid conditions. Acknowledging the pivotal roles of age, sex, and social determinants of health in shaping the onset and progression of these diseases, our study delves into the nuanced interplay between life-stage, socioeconomic status, and comorbidity patterns within cardiovascular diseases. Leveraging data from a cross-sectional survey encompassing Mexican adults, we unearth a robust association between these variables and the prevalence of comorbidities linked to cardiovascular conditions. To foster a comprehensive understanding of multimorbidity patterns across diverse lifestages, we scrutinize an extensive dataset comprising 47,377 cases diagnosed with cardiovascular ailments at Mexico's national reference hospital. Extracting sociodemographic details, primary diagnoses prompting hospitalization, and additional conditions identified through ICD-10 codes, we unveil subtle yet significant associations and discuss pertinent specific cases. Our results underscore a noteworthy trend: younger patients of lower socioeconomic status exhibit a heightened likelihood of cardiovascular comorbidities compared to their older counterparts with a higher socioeconomic status. By empowering clinicians to discern non-evident comorbidities, our study aims to refine therapeutic designs. These findings offer profound insights into the intricate interplay among life-stage, socioeconomic status, and comorbidity patterns within cardiovascular diseases. Armed with data-supported approaches that account for these factors, clinical practices stand to be enhanced, and public health policies informed, ultimately advancing the prevention and management of cardiovascular disease in Mexico. |
comorbidities, making it more challenging to care for them (2). To account for this, the term comorbidity was coined to represent the occurrence of other medical conditions in addition to an index condition of interest (3).
Such comorbidity relationships occur whenever two or more diseases are present in the same individual more often than by chance alone (4,5). Multimorbidity is associated with the risk of premature death, loss of functional capacity, depression, complex drug regimes, psychological distress, reduced quality of life, increased hospitalizations and decreased productivity. This is also linked to an economic burden for health-care systems and society (6,7).
There is evidence linking comorbidity with social determinants of health (SDHs) such as cultural issues, social support, housing, demographic environment and SES. Together, these factors create a source of complexity and potential vulnerability for those facing them, many of whom are disadvantaged by living in socioeconomically deprived areas. The association between SES and the prevalence of multimorbidity has recently been established (8-12). Also, these complex phenomena have profound implications for the delivery of high-quality care for chronic health conditions and underscore the necessity of complex interventions to tackle multimorbidity (2,13-15).
Likewise, the non-random co-occurrence of certain diseases differs across life-stages: the prevalence of multimorbidity increases with age, with 60% of events reported among 65-74 year olds, higher than the prevalence of each individual disease (16,17). Sex is also increasingly perceived as a key determinant of multimorbidity in cardiovascular disease (CVD). Although CVD has been seen as a predominantly male disease, owing to men's higher absolute risk compared with women, the relative risk of CVD morbidity and mortality in women is further increased by several medical factors (diabetes, hypertension, hypercholesterolemia, obesity, chronic kidney disease, rheumatoid arthritis and other inflammatory joint diseases) (18,19). Moreover, serious multiple chronic conditions are more common in older women (older than 65) and may limit treatment alternatives (10,20,21).
Multimorbidity presents many challenges, which may at times seem overwhelming. In such a scenario, evidence-based treatment guidelines, designed for single diseases, may lead to serious therapeutic conflicts (1). To circumvent such limitations, a personalized approach to medicine may benefit from the inclusion of ideas from a somewhat recent field of research, generally known as systems biology or, when applied to humans, network medicine. This approach offers the potential to decipher and understand the relationships between comorbidities at a much deeper level by considering coordinated instances (systems) rather than single conditions. The theoretical framework of the diseasome indicates that most human diseases are interdependent. This concept led to another, the human disease network (HDN), a graph in which two diseases are connected if they share some biological, genetic, metabolic or even socioeconomic element (1).
The present study aims to analyze the patterns of cardiovascular-associated multimorbidity stratified by life-stage, sex and socioeconomic status. Studying such patterns at a large scale will be useful both to discover trends helpful for public health care planning and to provide additional clues for understanding the complex interaction between the genetic/molecular, clinical and social/environmental determinants of cardiovascular diseases across the different age/sex/SES groups.
Here, we will expand upon the outcomes derived from various comorbidity networks under consideration. These networks were constructed based on previously outlined criteria, taking into account structural characteristics arising from relationships between pairs of diseases. The focus will be on mutual information (MI) shared by diseases (depending on the frequency of their joint presence, as we will see later), acting as an indicator of co-occurrence between two conditions. This approach offers an additional perspective for discerning comorbidity patterns across diverse networks.
To pinpoint comorbidities potentially linked to sex and/or SES within each network, we will employ the Page Rank Score (PRS), a network indicator of overall influence. This scoring system enables the numerical identification of diseases with greater relevance within each network (1). By factoring in MI between pairs of diseases, the PRS enhances precision, providing richer insights into comorbidity and multimorbidity phenomena in individuals.
The main research question that we address in this work is therefore: How does the interplay between life-stage and socioeconomic status influence the comorbidity patterns in cardiovascular diseases among the Mexican metropolitan population, and what are the subtle associations and differences in comorbidity prevalence across age groups and socioeconomic strata?
2 Materials and methods
---
Data acquisition (electronic health records)
The National Institute of Cardiology 'Ignacio Chávez' (NICICH), one of Mexico's National Institutes of Health, is the reference hospital for specialized cardiovascular care in Mexico. The NICICH is also a third-level hospital receiving in-patients with related ailments such as metabolic, inflammatory and systemic diseases, whose treatment may involve immunology, rheumatology, nephrology, and similar specialities in addition to cardiology-related treatments (1).
In this work we used the NICICH Electronic Health Record (EHR) database entries as recorded between January 1, 2011 and June 31, 2019. The EHR database contains information on socioeconomic factors as well as the main clinical diagnosis that led to hospitalization; it also reports other diseases, disorders, conditions or health problems that the individuals may present. The SES as recorded in the institutional file is a well-defined construct that involves the weighting of variables related to education, employment status, family monetary income, access to public services (water, electricity, drainage) and housing conditions (rural or urban). The EHR management procedures of the institution are set to provide up to five main comorbidities. The International Classification of Diseases, tenth revision (ICD-10) was used to identify and classify them. The full set of hospital-discharged patients, with all types of diagnoses, ages, sexes and SES, was considered in the time period under study, with the exception of those with incomplete information or erroneous coding. The study population included 47,377 discharged cases. The cardiovascular comorbidities assessed included any disease registered in each case (see Figure 1).
---
Data processing (ICD-10 coding)
Once EHR data had been pre-processed into tabular format, disease and comorbidity relationships could be investigated. Mining, processing and cross-transforming ICD-10 data were performed using the icd (v. 4.0.9) R library (22) (https://www.rdocumentation.org/packages/icd/versions/4.0.9). While ICD codes are increasingly becoming useful tools in the clinical and basic research arenas, their use is not free of caveats and limitations (for a brief discussion of some of these in the context of current norms, please refer to the relevant paragraphs in the discussion section).
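The cross-transformations themselves were done with the icd R package; purely as a language-agnostic illustration of one typical step in such pipelines — collapsing full ICD-10 codes to their three-character categories before counting co-occurrences — a small Python sketch (our own assumption, not the package's API) might look like this:

```python
import re

def icd10_category(code: str) -> str:
    """Collapse a full ICD-10 code (e.g. 'I25.1' or 'I251') to its
    three-character category ('I25'), a common grouping step."""
    cleaned = code.strip().upper().replace(".", "")
    if not re.match(r"^[A-Z][0-9]{2}", cleaned):
        raise ValueError(f"not a recognisable ICD-10 code: {code!r}")
    return cleaned[:3]

assert icd10_category("I25.1") == "I25"   # chronic ischaemic heart disease
assert icd10_category("e11.9") == "E11"   # type 2 diabetes mellitus
```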
---
Statistical analysis
A database of 47,377 electronic health records (EHRs) was used as the corpus for this study. Analyses were stratified by age and sex group. Descriptive statistics were used to summarize overall information. The chronic conditions with the highest prevalence, stratified by SES, and the number of chronic conditions associated with each disease were computed.
---
Cohort stratification
For the purpose of statistical and network analysis, patients were stratified based on age and sex. The age groups were defined as follows: the 0-20 years old bracket comprised 9,782 individuals (20.65%), of whom 4,921 were women and 4,861 men; the 21-40 years old range comprised 6,939 individuals (14.65%), split into 3,593 women and 3,346 men; the 41-60 years old range included 13,690 persons (28.90%), with 5,095 women and 8,595 men; 14,537 (30.68%) individuals made up the 61-80 years old group, with 5,695 women and 8,842 men; lastly, the 81 years and older group had 2,429 (5.13%) registered patients: 1,187 women and 1,242 men. These strata were used to build the different comorbidity networks that will be presented and discussed later.
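As an illustration of how such a stratification might be expressed in code, the following Python sketch (hypothetical column names; not the authors' in-house R pipeline) bins ages into the five brackets above and splits cases by age group, sex and SES:

```python
import pandas as pd

# Toy cases; the real table has one row per discharged case with age, sex and SES.
ehr = pd.DataFrame({"age": [15, 35, 67, 84],
                    "sex": ["F", "M", "F", "M"],
                    "ses": ["low", "high", "low", "high"]})

bins = [0, 20, 40, 60, 80, float("inf")]
labels = ["0-20", "21-40", "41-60", "61-80", "81+"]
ehr["age_group"] = pd.cut(ehr["age"], bins=bins, labels=labels,
                          right=True, include_lowest=True)

# One stratum (and, downstream, one comorbidity network) per age_group x sex x ses combination.
strata = {key: group for key, group in ehr.groupby(["age_group", "sex", "ses"], observed=True)}
print(ehr)
```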
---
Cardiovascular comorbidity network (CVCnetworks)
Electronic health record data were processed using in-house developed code (in the R programming language) for the design and analysis of comorbidity networks as previously reported (1). Programming code for this study is available in the following public-access repository: https://github.com/CSB-IG/Comorbidity_Networks. Once the mining of the medical cases was carried out, a set of undirected networks (one network for each age/sex/SES bracket combination, see Subsection 2.4) was built based on the significantly co-occurrent diseases coded according to ICD-10.
Briefly, the origin and destination nodes in these networks are diseases as identified by their respective ICD-10 codes. Subsequently, a link was drawn between two nodes as long as the corresponding diseases co-occur in the same person within the group more often than by chance alone (hypergeometric test, with a False Discovery Rate (FDR) multiple testing correction, FDR ≤ 0.05). The strength of the comorbidity association was determined using the MI calculated for each pair of diseases in the CVCnetworks with a custom-made script (available at https://github.com/CSB-IG/ICD_Comorbidity/blob/main/Disc_Mut_Info.py) based on the mutual_info_score function of the sklearn.metrics Python package.
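For readers who want a concrete picture of this edge-building rule, the sketch below (a simplified re-implementation for illustration, not the authors' published script) tests each disease pair with a one-sided hypergeometric test, applies a Benjamini-Hochberg FDR correction at 0.05, and weights the surviving edges by the mutual information of the two binary indicator vectors:

```python
from itertools import combinations

import numpy as np
from scipy.stats import hypergeom
from sklearn.metrics import mutual_info_score
from statsmodels.stats.multitest import multipletests

def comorbidity_edges(case_by_disease: np.ndarray, alpha: float = 0.05):
    """case_by_disease: binary (0/1) matrix, rows = patients, columns = diseases.

    Returns (i, j, mi) triples for disease pairs that co-occur more often than
    expected by chance (hypergeometric tail test, Benjamini-Hochberg FDR <= alpha).
    """
    n_cases, n_diseases = case_by_disease.shape
    pairs, pvals, weights = [], [], []
    for i, j in combinations(range(n_diseases), 2):
        a, b = case_by_disease[:, i], case_by_disease[:, j]
        joint = int(np.sum(a * b))
        # P(X >= joint) when |b| cases are drawn from n_cases with |a| "successes".
        p = hypergeom.sf(joint - 1, n_cases, int(a.sum()), int(b.sum()))
        pairs.append((i, j))
        pvals.append(p)
        weights.append(mutual_info_score(a, b))
    reject = multipletests(pvals, alpha=alpha, method="fdr_bh")[0]
    return [(i, j, w) for (i, j), w, keep in zip(pairs, weights, reject) if keep]
```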
---
Network statistics and visualization
In network theory, one of the parameters used to evaluate the connections in the graph is the degree centrality (DC), the total number of links on a node or the sum of the frequencies of the interactions. The degree of a disease is thus the number of ICD-10 codes associated with that disease. Aside from the node degree, a relevant centrality measure is the PRS (23), which captures the relative influence of a given node in the context of network communication. The Network Analyzer plugin (24) in the Cytoscape open source network analysis suite was used to explore and visualize the network (25), and the CytoNCA package was used to calculate further network centrality measures (26).
The betweenness centrality (BC) measure is used to assess the relevance of a given condition in terms of a node's influence on global network information flow. Weighted network analytics, PRS calculations and visualization were performed using Gephi (27). In brief, MI will be used to assess the strength of comorbidity relations (i.e., a higher MI value represents a stronger comorbidity association between two diseased conditions). PRS, on the other hand, will be used to assess the relevance of a given disease in the context of the comorbidity network given its vicinity (i.e., a higher PRS value represents a higher potential to become a multimorbid condition). In the context of this vicinity, we will often refer to the set of diseases directly connected to a given disease as its comorbidity nearest neighbors (CNNs).
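Although the authors computed these measures with Cytoscape, CytoNCA and Gephi, the same quantities can be illustrated with a few lines of Python and networkx on a toy MI-weighted graph (the edge list below is invented for the example; note that a weighted betweenness variant would require converting MI into a distance, so betweenness is computed unweighted here):

```python
import networkx as nx

# Toy (disease, disease, MI) triples; ICD-10 categories chosen only for illustration.
edges = [("I25", "E11", 0.12), ("I25", "I10", 0.20),
         ("E11", "N18", 0.08), ("I10", "N18", 0.05)]

G = nx.Graph()
G.add_weighted_edges_from(edges, weight="mi")

degree = dict(G.degree())                        # number of comorbid partners per disease
pagerank = nx.pagerank(G, weight="mi")           # MI-weighted PageRank score (PRS)
betweenness = nx.betweenness_centrality(G)       # unweighted; MI would need a distance transform

cnn_i25 = list(G.neighbors("I25"))               # comorbidity nearest neighbors (CNNs) of I25
print(sorted(pagerank, key=pagerank.get, reverse=True), cnn_i25)
```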
In this study, a double-circle layout visualization was implemented, where nodes were arranged according to their PRS in a counter-clockwise direction, and the top 10 highest ranking diseases were placed in the center of the graph. Nodes were colored on a gradient scale from red (higher closeness centrality) to blue (lower closeness centrality). Additionally, the node size was determined based on their Betweenness Centrality measure, where larger nodes indicated a higher value of Betweenness Centrality.
---
Results
---
Cardiovascular comorbidity networks general results
Comorbidity networks were built for the specific age/sex/SES strata as previously described, and a general topological analysis was conducted prior to a detailed analysis of each network. Table 1 presents the main topological features of these networks. By examining the connectivity and structural patterns, significant relationships can be identified, which will be discussed later (a set of tables containing the full connectivity information for all the networks can be found in the Supplementary Materials). The analysis of the various networks shown in Table 1 revealed that, overall, individuals with low SES exhibited a higher diversity of diseases, reflected in a larger number of nodes, often double or more compared with the high SES networks. This phenomenon is attributed to health inequalities arising from constraints faced by this population, making them more susceptible to developing diseases that are not prevalent in high SES individuals, or to manifesting and being treated differently due to varying access to necessary resources, ranging from adequate nutrition to healthcare.
The clustering coefficient showed notable uniformity across all networks, with the network corresponding to men aged 61-80 with low SES exhibiting the highest clustering level. This suggests that, starting from an initial disease, individuals in this network are more likely to develop any of the other diseases in the network. While this observation does not sharply differentiate this group from the other networks, it raises concerns about disease interactions and, consequently, about treatment with respect to pharmacological interactions.
The higher prevalence of disease diversity in low SES may be associated with the greater density observed in high SES networks. This increased density implies more interconnections among all diseases in high SES networks. However, this doesn't necessarily indicate a higher propensity for comorbidity in high SES individuals, as evidenced by a considerably higher number of connections in low SES, particularly among men over 80.
Furthermore, finding a higher network centralization in graphs for the age range of 0-20 years, with even greater centralization in high SES, may result from comorbidities influenced by factors related to birth. Additionally, there is a lower disease diversity in high SES, while the higher number of diseases in low SES diversifies the conditions centralizing comorbidity relationships.
The average number of comorbidities, measured through the average neighbors in various networks, tends to be higher in older ages compared to young individuals. This trend is more pronounced in men than women. Notably, the decrease in the average number of comorbidities in the population over 80 contradicts existing literature.
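As a hedged illustration of how the per-network topological summaries reported in Table 1 (nodes, edges, density, clustering, average neighbors, centralization) could be computed, the sketch below uses networkx on a random toy graph; the degree-centralization line follows Freeman's normalization and is an assumption about how the reported centralization was defined.

```python
# Sketch of a per-network topological summary on a toy graph.
import networkx as nx

G = nx.gnm_random_graph(30, 60, seed=2)   # stand-in for one age/sex/SES network

n = G.number_of_nodes()
summary = {
    "nodes": n,
    "edges": G.number_of_edges(),
    "density": nx.density(G),
    "clustering_coefficient": nx.average_clustering(G),
    "avg_neighbors": sum(dict(G.degree()).values()) / n,  # mean comorbidities per disease
}

# Freeman degree centralization: how dominated the network is by its hub
deg = dict(G.degree())
max_deg = max(deg.values())
summary["degree_centralization"] = sum(max_deg - d for d in deg.values()) / ((n - 1) * (n - 2))

print(summary)
```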
A closer analysis of the different CVC networks reveals that some pairs of diseases recur as the most relevant comorbidities in more than one age/sex/SES group. Some of the most relevant of these pairs are shown in Figure 2. Understanding disease pairs that transcend demographic (e.g., age/sex/SES) boundaries may help provide a holistic view of health challenges and opportunities for intervention, and may contribute to more effective public health strategies and policies that consider the interconnected nature of diseases across diverse populations. Let us examine some of the implications.
---
Cardiovascular comorbidity networks based on socioeconomic status in men and women aged 20 years or younger
Upon visual inspection of the distinct networks depicted in Figure 3, discernible structural differences in their connectivity patterns come to light. A more nuanced examination of the network statistics could provide additional insights into shared features and commonalities. For example, Table 2 highlights the top 5 diseases with the highest comorbidity burden, as indicated by their respective PRS within the specified network.
FIGURE 2
Presence of disease pairs in different stages of life by sex and socioeconomic status.
---
FIGURE 3
Comorbidity networks for patients aged 0-20 years old, for both sexes in the low (LSES) and high (HSES) socioeconomic status. Nodes are ordered according to their PageRank score (PRS); high-PRS nodes appear in the center. Node size and color intensity (red implies higher values, blue lower values) are also given by the PRS as a measure of relative importance in the network. Edge size and color represent the mutual information weight between disease pairs.
Several commonalities emerge among these highly comorbid diseases. Regardless of sex or SES, the notable presence of Other specified congenital malformations of the heart (Q24.8) and Generalized and unspecified atherosclerosis (I70.9) is observed. Additionally, Other forms of chronic ischemic heart disease (I25.8) appears in three out of four networks, except for men with high SES, where it ranks 9th according to its PRS. A similar scenario unfolds for Atrial septal defect, which moves to the 7th rank in men of low SES. Notably, most highly prevalent and comorbid diseases in this age group exhibit a strong genetic risk component, likely explaining their consistently high rankings across all four networks, irrespective of sex or SES.
Equally noteworthy, albeit for divergent reasons, are instances such as Unspecified cardiac insufficiency (I50.9), which holds a high rank solely among individuals of both sexes with high SES, and Chronic kidney disease, unspecified (N18.9), appearing exclusively in the top 5 for men and women of low SES. The association of unspecified chronic kidney disease with low SES in children and young adults (up to 20 years old) suggests a probable link to environmental factors. Consequently, we opted to investigate its network neighborhood. Intriguingly, robust comorbidity relationships with M32.1 Systemic lupus erythematosus with organ or system involvement were identified in networks corresponding to different age/sex/SES categories.
It is pertinent to note that the presence of what has been termed lupus nephritis in children is well documented (28-31). Notably, lupus nephritis can be specifically reported using the ICD-10 code M32.14 Glomerular disease in systemic lupus erythematosus, rather than the more general code M32.1. Nevertheless, juvenile systemic lupus erythematosus (JSLE) has been reported as a more active disease in children and young adults, characterized by faster progression and worse outcomes, including progressive chronic kidney disease, compared to its adult-onset counterpart, leading to poorer long-term survival. Studies indicate that lupus nephritis may affect up to 50%-75% of all children with JSLE. Consequently, analyzing the comorbidity landscapes associated with concurrent N18.9 and M32.1 (or M32.14) may offer valuable insights for determining optimal diagnostic and therapeutic strategies to enhance patient outcomes.
---
Cardiovascular comorbidity networks based on socioeconomic status in men and women aged 21-40 years
Examination of comorbidity networks for individuals aged 21-40 years, encompassing both sexes and SES, reveals similar trends in highly comorbid conditions as observed in children and young adults (aged 0-20 years). Noteworthy diseases, including Other specified cardiac arrhythmias (I49.8), Other forms of chronic ischemic heart disease (I25.8), Chronic kidney disease, unspecified (N18.9), and Other specified congenital malformations of the heart (Q24.8), consistently rank among the top 5 conditions with high PRS in their respective networks, irrespective of sex or SES (see Table 3 and Figure 4).
It is evident that, up to this age bracket, the most highly morbid conditions are largely shared across different SES. Notably, Other and unspecified atherosclerosis (I70.9), which does not appear in the top 5 for women with high SES in Table 3, is nonetheless ranked 6th in that particular subgroup.
---
Cardiovascular comorbidity networks based on socioeconomic status in men and women aged 41-60 years
In examining the networks for the current age range (depicted in Figure 5), notable diseases consistently rank among the top five according to their PRS (Table 4). Among them, Unspecified heart failure (I50.9) and Unspecified chronic kidney disease (N18.9) show differences with respect to SES; consequently, a more in-depth analysis of these latter two diseases was undertaken.
TABLE 2
Top 5 diseases with a higher comorbidity burden in the networks for men and women patients of low and high SES aged 20 years old or less, as well as their PRS values.
FIGURE 4
Comorbidity networks for patients aged 21-40 years old, for both sexes in the low (LSES) and high (HSES) socioeconomic status. Visualization parameters are as in Figure 3.
FIGURE 5
Comorbidity networks for patients aged 41-60 years old, for both sexes in the low (LSES) and high (HSES) socioeconomic status. Visualization parameters are as in Figure 3.
Regarding Unspecified heart failure (I50.9) in the low SES group, it maintains a substantial position, ranking eighth among both men and women based on their PRS. In contrast, Unspecified chronic kidney disease (N18.9) continues to feature prominently in the high SES group, ranking sixth among men and seventh among women according to their PRS.
As no discernible SES differences were observed based on the PRS alone, a first-neighbors analysis was conducted on the diseases listed in Table 4. This analysis considered the MI between pairs of diseases and examined the relationships formed among the diseases mentioned in the table. Notably, the relationship between Unspecified atherosclerosis (I70.9) and Unspecified chronic kidney disease (N18.9), sharing an MI of 0.018057, showed a distinction with respect to SES in the women's networks.
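A first-neighbors analysis of this kind can be expressed very compactly; the sketch below (illustrative only, with invented MI values apart from the 0.018057 figure quoted above) lists the comorbidity nearest neighbors of a disease of interest, ranked by the MI carried on the connecting edges.

```python
# Sketch of a first-neighbors (CNN) analysis ranked by mutual information.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("I70.9", "N18.9", 0.018057),   # MI value quoted in the text
    ("I70.9", "I25.8", 0.021),      # illustrative values
    ("I70.9", "I50.9", 0.009),
    ("N18.9", "I50.9", 0.011),
])

def first_neighbors(graph, disease):
    """Return the neighbors of `disease`, sorted by decreasing edge MI."""
    pairs = [(nbr, graph[disease][nbr]["weight"]) for nbr in graph.neighbors(disease)]
    return sorted(pairs, key=lambda t: t[1], reverse=True)

print(first_neighbors(G, "I70.9"))
```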
---
Cardiovascular comorbidity networks based on socioeconomic status in men and women aged 61-80 years
In this population, Table 5 highlights consistent representation of the same diseases among the top positions in all graphs, including Other forms of chronic ischemic heart disease (I25.8), Unspecified atherosclerosis (I70.9), Other specified congenital malformations of the heart (Q24.8), and Other specified cardiac arrhythmias (I49.8). Notably, Unspecified chronic kidney disease (N18.9) ranks sixth for high-SES men. Similarly, Unspecified rheumatic diseases of endocardial valve (I09.1) appears in sixth place for low-SES men and for women in both strata.
In the analysis of nearest neighbors, it was found that Other specified congenital malformations of the heart (Q24.8), consistently positioned in the networks from early stages of life, is linked to Unspecified rheumatic diseases of endocardial valve (I09.1) exclusively in low-SES men, sharing an MI of 0.015235 and occupying the fifty-second position among the relationships in this population. This phenomenon appears solely in this age range and in low-SES men (see Figure 6). For women, the relationship between Other specified congenital malformations of the heart (Q24.8) and Unspecified rheumatic diseases of endocardial valve (I09.1), absent in the present age range, is evident between 21 and 60 years old, exclusively in the low-SES group.
---
Cardiovascular comorbidity networks based on socioeconomic status in men and women aged 80 and older
In the population aged 80 and older, Table 6 reveals a consistent top three diseases across both sexes and SES (see Figure 7): Other forms of chronic ischemic heart disease (I25.8), Unspecified atherosclerosis (I70.9), and Other specified cardiac arrhythmias (I49.8). These conditions maintain their prominence throughout the lifespan of the study population, alongside Unspecified heart failure (I50.9) and Unspecified chronic kidney disease (N18.9).
Regarding the latter two conditions, it is noteworthy that Unspecified heart failure (I50.9), absent among the top five diseases in the low SES group, ranks ninth for men and seventh for women in this stratum. On the other hand, Unspecified chronic kidney disease (N18.9), exclusive to men in the low SES group, is positioned nineteenth for men in the high SES group; for women, it appears sixth in the low SES group and twelfth in the high SES group according to their PRS. As for the remaining two diseases, they exhibit a relationship that is exclusive to women in the low SES group within the present age range; in this co-occurrence, the pair ranks ninety-sixth out of 1,634 disease pairs, with an MI of 0.004605. It is noteworthy that, at the individual level, Acute transmural myocardial infarction of anterior wall (I21.0), absent from the top five according to the PRS analysis for women of high SES and for men of low SES, takes the seventh place in the former case and the sixth place in the latter. Conversely, Unspecified rheumatic diseases of endocardial valve (I09.1) holds the thirteenth place of relevance for men of high SES.
The relationship between Acute transmural myocardial infarction of anterior wall (I21.0) and Unspecified rheumatic diseases of endocardial valve (I09.1), observed in previous age ranges, is limited to the low SES group. Specifically, it appears only between the ages of 61 and 80 for men, and from 41 years up to the present age range for women.
---
Discussion
In this section, we delve deeper into the outcomes derived from the various comorbidity networks, which were constructed based on the previously described criteria and on the structural characteristics arising from relationships between pairs of diseases. The analysis incorporates mutual information as an indicator of co-occurrence between two diseases, offering a supplementary perspective for discerning comorbidity patterns within these networks.
FIGURE 6
Comorbidity networks for patients aged 61-80 years old, for both sexes in the low (LSES) and high (HSES) socioeconomic status. Visualization parameters are as in Figure 3.
Shared pairs of diseases across several age/sex/SES strata (recall Figure 2) may provide information along several dimensions. The identification of comorbidities potentially associated with sex and/or SES in each network relied on the PageRank score, a numerical measure that allows us to pinpoint the diseases with greater relevance within each network (1). The PRS enhances precision by considering the MI between pairs of diseases, thereby providing more detailed insight into comorbidity and multimorbidity phenomena in individuals.
---
Comorbidity networks: general observations
In examining the disparities across the analyzed networks, a noteworthy observation emerges: individuals with low SES generally exhibit a greater diversity of diseases, often double or more, compared to their high-SES counterparts. This pattern suggests that individuals with low SES face health inequalities, making them more susceptible to a broader spectrum of diseases. These diseases may either not occur in high-SES individuals or manifest differently, influenced by varying access to essential resources for their care, ranging from nutrition to healthcare services (32-34). Moreover, the heightened diversity of diseases in the low-SES group may be linked to the greater density observed in high-SES networks. This increased density results in more connections between all diseases in high-SES networks. However, despite the higher number of connections, individuals with low SES exhibit a significantly higher number of comorbidity relationships, as is evident from Table 1.
FIGURE 7
Comorbidity networks for patients aged 81 years and older, for both sexes in the low (LSES) and high (HSES) socioeconomic status. Visualization parameters are as in Figure 3.
The finding of greater network centralization, particularly in the age range of 0-20 years-accentuated in high SES during these ages -may be attributed to specific comorbidities mediated by factors related to birth. This narrower diversity of diseases in high SES during these ages suggests distinct comorbidity patterns. Additionally, the greater number of diseases in low SES contributes to diversifying the conditions centralizing comorbidity relationships. Factors unique to low SES, such as overcrowding, nutrition, and structural conditions during infancy, expose individuals to different health challenges, potentially leading to varied patterns of comorbidity and multimorbidity in the short or long term (5,36,35).
Similarly, our analysis revealed that the average number of comorbidities, measured by the average number of neighbors in the different networks, is higher in older age groups compared to younger individuals (refer to Table 1). However, this trend is more pronounced in men than in women, as indicated by our results. Notably, in the population aged 80 and older, a decrease in average comorbidities is observed, which contradicts the prevailing literature on multimorbidity phenomena (37,39,38).
Among the most prevalent and clinically relevant conditions in various age groups, we consistently find Chronic kidney disease, unspecified (N18.9), Other specified congenital malformations of the heart (Q24.8), Other specified forms of chronic ischemic heart disease (I25.8), Heart failure, unspecified (I50.9), Unspecified atherosclerosis (I70.9), and Other specified cardiac arrhythmias (I49.8). This prevalence may stem from the interconnected nature of these conditions within the network, where several relationships involve equally significant diseases, influencing various physiological processes (1). However, it is crucial to note that the relevance of some of these conditions within the network may diminish or be absent in certain age groups, contingent on the SES of the patients, as we will discuss later.
This observation that younger patients of low socioeconomic status are more likely to have comorbidities than older subjects of higher SES raises relevant questions. Some of these issues may be related to limited access to healthcare, social determinants of health (SDHs) in early life, nutrition and lifestyle factors, environmental factors, educational attainment and healthcare utilization patterns, among other constraints (40, 41). Since individuals with lower SES often face barriers in accessing healthcare services, including preventive care and early diagnosis, this may result in undiagnosed or untreated health conditions, contributing to the development of comorbidities. Also, early childhood experiences and SDHs, such as nutrition, access to quality education, and living conditions, significantly influence health outcomes later in life (42, 43). Younger patients from low SES backgrounds may have experienced adverse childhood conditions that contribute to the development of health issues and comorbidities. Younger individuals with lower SES may also have limited access to healthy food options, leading to dietary habits that increase the risk of conditions such as obesity, diabetes, and cardiovascular diseases (44, 45). Living in socioeconomically disadvantaged neighborhoods can expose individuals to environmental factors that contribute to poor health outcomes; environmental stressors, pollution, and lack of recreational spaces may impact the overall health of younger individuals from low SES backgrounds (46).
In summary, a complex interplay of socioeconomic, environmental, and lifestyle factors contributes to the observation that younger patients of low SES are more likely to have comorbidities. These factors highlight the importance of addressing social determinants of health and implementing interventions that promote health equity and access to comprehensive healthcare services for all individuals, regardless of socioeconomic status. Let us examine these complex comorbidity patterns in more detail.
---
Comorbidity networks in individuals aged 0-20 years
In this initial age range, our analysis of the PRS initially highlighted two diseases that could be associated with low SES. First, Unspecified heart failure (I50.9), listed only among the top five in high SES according to Table 2, was revealed through a deeper analysis to maintain a prominent position in low SES, ranking among the top ten most important. This suggests that it is not exclusive to the high SES population (47).
In contrast to Unspecified heart failure (I50.9), Chronic kidney disease unspecified (N18.9) predominantly affects men with low SES according to our results. There is evidence linking low SES to a predisposition to chronic diseases, including Chronic kidney disease unspecified (N18.9), either directly or as a consequence of preceding chronic diseases, with social determinants of health playing a crucial role (48,49). While further investigation is necessary, factors such as education may be related, suggesting that in this age range, the influence of this factor could stem from the family nucleus where infants and adolescents develop (51,50). Additionally, habits related to nutrition and lack of physical activity can impact the development of conditions closely related to N18.9, such as obesity (52,53), a significant health issue in Mexico from an early age (54,55).
It is worth noting that the presence of Chronic Kidney Disease unspecified (N18.9) in the early years of life is related to congenital malformations and glomerulopathies as the main known causes (57, 56). These conditions may be linked to factors inherent to urbanization, overpopulation, and hygiene, which can negatively impact certain biological processes and increase the risk of developing these diseases (58).
Given that Chronic Kidney Disease unspecified (N18.9) predominantly affects low-SES men in this age range, its comorbidities are likely specific to this population. Therefore, through an analysis of its first neighbors, we decided to explore its relationship with Systemic Lupus Erythematosus with organ or system involvement (M32.1), as they have a somewhat direct relationship (59). Moreover, it is also a disease more commonly found in low-SES men (60,61) according to our findings.
Regarding this pair of diseases, lupus erythematosus tends to affect various vital organs, and although less frequent in children, it is more severe than in adults, with kidney disease present in 50%-90% of patients. Therefore, the close relationship between these conditions within the network is not surprising. The association of M32.1 with low SES may be attributed to the condition's multifactorial nature, involving genetic and environmental factors. Recurrent infections are also known risk factors for triggering the onset of the disease, with these types of infections more prevalent in families where young children live with school-aged children. Such situations are characteristic of overcrowded environments where multiple families coexist, a scenario more common among individuals with low SES. Premature or low birth weight babies, found more frequently in low-SES settings, are also significant factors in recurrent infections (62).
Conversely, in women, Chronic Kidney Disease unspecified (N18.9) does not exhibit different impacts by SES, according to our data. This suggests that differences in its occurrence by SES may be less pronounced in this population, although further research is needed to confirm this. Additionally, the relationship with Systemic Lupus Erythematosus with organ or system involvement (M32.1) is present irrespective of SES. This may be related to the fact that women are more predisposed to developing M32.1. However, it's essential to consider that both diseases are linked to cardiovascular system issues, influenced by biological and lifestyle factors, aligning with the context of the data from which these networks were modeled. Therefore, other associated variables need to be considered to ascertain the significance of SES in the concurrent occurrence of these conditions in men.
---
Comorbidity networks in individuals aged 21-40 years
In the networks specific to individuals aged 21-40 years, we observe that although the same diseases maintain their top positions according to their PageRank score, the relationships some diseases have with their first neighbors vary. An illustrative example is the case of Other specified congenital malformations of the heart (Q24.8), which, exclusively in low SES for both men and women, retains a first-neighbor relationship with Chronic kidney disease, unspecified (N18.9) and Other forms of chronic ischemic heart disease (I25.8), a relationship suggested in previous literature (63, 64).
The multifactorial etiology of Q24.8 as a congenital malformation implies potential influences from genetic and maternal factors during pregnancy, including maternal health conditions such as diabetes, hypertension, and obesity (67,65,66,68,69). On the other hand, N18.9 and I25.8 share common factors associated with chronic diseases, such as unhealthy diets and sedentary lifestyles, believed to be more prevalent in low SES populations (70)(71)(72).
Thus, the shared social determinants of these three diseases in low SES, including the presence of unhealthy habits, a family history of chronic diseases, and limited access to healthcare, could contribute to their co-occurrence in this population. While further research is necessary to confirm this relationship, the current study highlights the significant association between the concurrent occurrence of these diseases and the mutual information they share in their respective networks.
A more specific and noteworthy case pertains to the association between Chronic Kidney Disease, unspecified (N18.9) and Other specified chronic ischemic heart disease (I25.8), observed exclusively in the network of low-SES men within this age range. This relationship carries an MI score of 0.021196, indicative of its clinical relevance as a comorbidity in this population (73, 74). The co-occurrence of these two diseases is anticipated due to the well-established predisposition of kidney disease toward cardiovascular conditions, with I25.8 being a notable example. Furthermore, the incidence of I25.8 is known to be age- and sex-related, with a lower likelihood of development among women of childbearing age due to the protective effect of sex hormones (75). In this context, an intermediate disease, Other specified congenital malformations of the heart (Q24.8), could partially explain the observed association between N18.9 and I25.8; however, additional research is necessary to confirm this hypothesis. Notably, the exclusive appearance of this comorbidity in the low-SES men network is likely linked to the shared structural and lifestyle factors discussed earlier.
---
Comorbidity networks in individuals aged 41-60 years
The most relevant conditions within the high-SES networks remain consistent in the top five positions for both men and women. There are striking similarities in the lower SES group, where Chronic kidney disease, unspecified (N18.9) replaces Heart failure, unspecified (I50.9); nevertheless, the latter remains among the top ten most important conditions (i.e., high PageRank score) according to our results. Hence, among the most significant differences observed in this age range, it is notable that men, irrespective of SES, exhibit first-neighbor relationships among all the diseases included in the top five of the men's networks for this age range (see Table 4). In contrast, women, as per our data, manifest different configurations contingent on their SES. Notably, a direct relationship between Atherosclerosis, unspecified (I70.9) and Chronic kidney disease, unspecified (N18.9) is evident only in women of high SES. This pairing becomes noteworthy because it is the sole age range where a discrepancy surfaces concerning sex and SES; it suggests that only women of high SES exhibit this comorbidity at earlier ages than those of low SES (76), unlike men, who experience this comorbidity in both SES groups.
Regarding this pair of diseases, it is established that Chronic kidney disease (N18.9) tends to foster the development of cardiovascular diseases, including Atherosclerosis (I70.9), due to deficiencies intrinsic to renal deterioration and its association with the cardiovascular system. This association becomes more prominent in advanced stages of renal disease (78,77), underscoring the close relationship between both conditions. The differentiated appearance in women based on SES, affecting initially women of high SES, is a counterintuitive phenomenon. Typically, chronic diseases are anticipated to emerge earlier or with more substantial impact in low SES populations due to the interplay of various factors inherent in low SES (81,82,80,79). Further research is warranted to comprehensively understand this phenomenon.
---
Comorbidity networks in individuals aged 61-80 years old
In this age range, the diseases occupying the top positions in each network generally remain constant according to their PRS, with some changes in their level of relevance, as shown in Table 5. We therefore analyzed how the most relevant diseases in that table (those with the highest PRS) were organized with respect to one another, which showed that Rheumatic diseases of endocardial valve, unspecified (I09.1) appears significantly connected to Other specified congenital malformations of heart (Q24.8) only in low-SES men (83, 84). While this association may be expected, given that cardiac malformations could contribute to the development of I09.1, the exclusive appearance of this relationship in men of low SES is noteworthy.
In addition to the presence of a cardiac malformation, other risk factors that are associated with I09.1, such as poor oral health and hygiene (85, 86) or injectable drug use (87), may increase the likelihood of co-occurrence between these diseases in individuals of low SES. Studies have linked poor dental hygiene to low SES, which may result from limited access to education or health services (88,90,89). Likewise, low SES has been associated with higher drug consumption, possibly due to factors such as education, family background, place of residence, and social relationships (91). Injectable drug use has been specifically linked to poverty and unemployment, although this relationship requires further investigation (92).
As can be seen in the previous paragraph, although the joint presence of the aforementioned conditions is influenced by biological factors, much of the weight in the occurrence of Unspecified rheumatic diseases of endocardial valve (I09.1) may fall on the circumstances people experience throughout their development. This argues for comprehensive interventions in the treatment and care of patients with congenital malformations, interventions that should focus not only on medical treatment but also on the care of vulnerable groups, in order to reduce the inequality gaps that could explain why both conditions affect more people with low SES (84, 93).
---
Comorbidity networks in individuals aged 80 years and older
In this population, we found that even though the most relevant diseases in the different networks remain constant, the relationships they form among themselves can differ. An example is the pair formed by Acute transmural anterior wall myocardial infarction (I21.0) and Unspecified rheumatic valve diseases (I09.1), which showed a relationship only in low-SES women in this age range (93-95).
Regarding this pair, the literature generally indicates that Acute transmural anterior wall myocardial infarction (I21.0) is a rare or infrequent complication in patients with Unspecified rheumatic valve diseases (I09.1), occurring in the acute phase of the disease, where coronary embolism related to bacterial endocarditis causes an acute myocardial infarction (96). This supports the idea that biological processes are involved in the co-occurrence of these two conditions, but it leaves their relationship with low SES unresolved. It is therefore necessary to continue investigating this topic and to analyze why the pair appears more frequently in women since, as mentioned above (97), both diseases have a strong and well-positioned relationship according to our results. This becomes more important if we take into account that mortality from Acute transmural anterior wall myocardial infarction (I21.0) increases directly with age (98) and that, according to our results, both are very well-positioned diseases in the network of women over 80 years old according to their PRS (Table 6).
---
Cardiovascular comorbidity in the context of social determinants of health
Social determinants of health are the conditions in which people are born, grow, live, work, and age, and they play a crucial role in shaping health outcomes. These determinants are influenced by the distribution of money, power, and resources at global, national, and local levels. The main SDHs include socioeconomic status, education, employment and working conditions, social support networks, healthcare access and quality, physical environment, social and economic policies, cultural and social norms, early childhood experiences, and behavioral factors. Understanding and addressing these social determinants is essential for developing effective public health policies and interventions aimed at improving overall health and reducing health disparities (99).
In the present context, the findings just presented point to some general trends. More specific issues may be uncovered by using the comorbidity networks to navigate local hospital EHRs. It is relevant, however, to point out that this work presents statistically significant associations; no causal or mechanistic explanations have been developed. Rather, our study aims to be a starting point for studying these, as well as a tool to inform hospital management and public health officials in planning and policy development.
---
Summary of findings
In what follows we will summarize the more relevant, general results observed by examining the comorbidity and multimorbidity patterns. These global (in the context of our analyzed populations) trends may help contextualize the highly variable landscape of cardiovascular comorbidity presented in this study and available at the Supplementary materials (i.e., the whole set of CVC networks).
• Comorbidity networks in people aged 0-20 years old
  • Unspecified Heart Failure (I50.9)
    • Initially associated with high SES but remains important in low SES.
    • Indicates it is not exclusive to high SES populations.
  • Chronic Kidney Disease, Unspecified (N18.9)
    • Primarily affects low-SES men.
    • Linked to low SES through factors like education, nutrition, and lifestyle.
• Chronic Kidney Disease, Unspecified (N18.9) in later age ranges
  • Replaces Unspecified Heart Failure (I50.9) in high SES.
  • Key comorbidity with Atherosclerosis (I70.9) in high-SES women.
---
Relation to other studies
Mapping the comorbidity and multimorbidity landscape of cardiovascular diseases has been an issue of interest in the international medical and biomedical research community for some time, and approaches parallel and complementary to the one just presented have been developed. These efforts range from the highly specific to the very broad. One quite relevant example of the latter is MorbiNet, a Spanish study that analyzes a very large population consisting of 3,135,948 adults in Catalonia, Spain. This work also mined EHRs but focused exclusively on the relationship between common chronic conditions and type 2 diabetes (100). MorbiNet, like the present work, is a network-based approach; there, the authors build networks from odds-ratio estimates adjusted by age and sex, reporting strong associations for conditions such as pancreas cancer (OR: 2.4). Though their methods are in some sense similar to ours, there are noticeable differences. Perhaps the most evident is that, due to the large scale of their study, they focus on common chronic diseases, largely regardless of the outcomes and mainly in relation to one (admittedly extremely important) condition, type 2 diabetes. Also, their networks are unweighted, meaning that every comorbidity relationship above the significance threshold contributes to the comorbidity landscape in a similar fashion, whereas in our case every comorbidity relationship is characterized by a mutual information value representing the relative strength of the association.
Though not exactly a comorbidity analysis, the framework for studying cardiovascular diseases from the standpoint of network science presented by Lee and coworkers is worth mentioning (101). There, the authors establish a set of basic network theory principles that allowed them to examine disease-disease interactions, uncover disease mechanisms, and even enable clinical risk stratification and biomarker discovery. A similar approach is sketched by Benincassa and collaborators (102), though its scope is more limited, focusing on uncovering disease modules.
A hybrid network-analytics/classical-epidemiology approach is presented by Haug et al. (103). They analyzed multimorbidity patterns, representing groups of included or excluded diseases, and delineated the health states of patients in a population-wide analysis spanning 17 years and encompassing 9,000,000 patient histories of hospital diagnoses (a data set provided by the Austrian Federal Ministry for Health, covering all approx. 45,000,000 hospital stays of about 9,000,000 individuals in Austria during the 17 years from 1997 to 2014). These patterns encapsulate the evolving health trajectories of patients, wherein new diagnoses acquired over time alter their health states. Their study assesses age- and sex-specific risks for patients to acquire specific sets of diseases in the future based on their current health state. The population studied is characterized by 132 distinct multimorbidity patterns. Among elderly patients, three groups of multimorbidity patterns are identified, associated with low (yearly in-hospital mortality of 0.2%-0.3%), medium (0.3%-1%), and high in-hospital mortality (2%-11%). Combinations of diseases that significantly elevate the risk of transitioning into high-mortality health states in later life are identified. For instance, in men (women) aged 50-59 with diagnoses of diabetes and hypertension, the risk of entering the high-mortality region within one year is elevated by a factor of 1.96 ± 0.11 (2.60 ± 0.18) compared to all patients of the same age and sex, respectively. This risk increases further to a factor of 2.09 ± 0.12 (3.04 ± 0.18) if they are additionally diagnosed with metabolic disorders. This study is similar to ours in the sense that it was not limited to particular diagnoses (though it only considered 1,074 codes from A00 to N99, grouped into 131 blocks as defined by the WHO, which excludes congenital diseases that are quite relevant for children and young individuals) and it was based on mining ICD-10 codes from the EHRs. Their emphasis, however, is different from ours, since they are more interested in patient trajectories, which describe the health state of a patient at different points in time, rather than in general trends useful for hospital management.
---
Limitations of the present study
This study utilizes ICD-10 codes to document and classify disease conditions. It is important to note that the use of ICD-10 codes in research presents challenges and limitations, as the system was primarily developed for hospital administration and cost-estimation purposes, rather than as a controlled vocabulary for standardized clinical reporting or epidemiological research (1). Concerns about the suitability of ICD codes for other secondary purposes, such as research or policy interventions, have been raised due to coding errors found in patient data by some authors (105,106,104,107).
The validity of ICD codes to identify specific conditions depends on the extent to which the condition contributes to health service use, as well as the time, place, and method of data collection (108). Diagnostic accuracy tests of ICD-10 codes have been conducted to evaluate features such as sensitivity, specificity, positive predictive values (PPV), and negative predictive values (NPV) for specific major diagnoses, major procedures, minor procedures, ambulatory diagnoses, co-existing conditions, and death status. These studies have generally found good-to-excellent coding quality for ICD-10 codes in these areas (1).
Given these considerations, when using these codes for clinical purposes, careful evaluation is necessary since the actual subjects of interest may not be accurately defined. This may be critical in the assessment of chronic conditions. Moreover, ICD codes perform better with sets of diseases enriched for frequent, well-known conditions.
It is noteworthy that in the specific case of Electronic Health Records in the NICICH, the administrative database coding, archiving, and retrieval procedures have been certified and validated by the World Health Organization (WHO) through the local 'Collaborating Center for WHO International Classification Schemes -Mexico Chapter' (CEMECE, for its Spanish acronym). These procedures are in agreement with ISO 9001:2000, ISO/IEC 27001 certifications, and with the Official Mexican Norm (NOM for its Spanish acronym): NOM-004-SSA3-2012 (1).
---
Concluding remarks
In conclusion, the analysis of comorbidity networks across different age groups and socioeconomic status reveals interesting patterns in disease co-occurrence. There are consistent associations between certain diseases, and these associations may vary based on age and SES. Moreover, the presence of certain comorbidities differ between men and women and across different age and SES, as expected. Some diseases, such as chronic kidney disease and specific cardiac conditions, consistently appear among the most relevant comorbidities across age groups and SES. Additionally, the study highlights specific associations, such as the relationship between unspecified heart failure, chronic kidney disease, and systemic lupus erythematosus with organ involvement, which may have implications for diagnostic and therapeutic strategies.
Notably, the findings also suggest that individuals with low SES tend to exhibit a greater diversity of diseases, potentially indicating disparities in health outcomes and access to healthcare resources. The importance of social determinants of health in shaping comorbidity patterns is evident, emphasizing the need for comprehensive interventions that address not only medical aspects but also social and environmental factors.
Overall, this study provides valuable insights into the complex landscape of comorbidities, shedding light on how age, sex, and SES contribute to the interconnected web of diseases. Further research and ongoing investigation are crucial to deepen our understanding of these relationships and inform more targeted and effective approaches to healthcare and disease prevention.
---
Data availability statement
The data analyzed in this study are subject to the following licenses/restrictions: data were taken from anonymized Electronic Health Records from the National Institute of Cardiology Ignacio Chavez. Data summaries are available upon request. Requests to access these datasets should be directed to [email protected].
---
Author contributions
EHL conceived the project; EHL and MMG directed and supervised the project; MMG and EHL designed and developed the computational strategy; EHL, MMG, FRA and HACA implemented the code and database search procedures; MMG, HACA, FRA and EHL conducted the calculations and validation; MMG, HACA, FRA and EHL analysed the results. MMG and EHL wrote the manuscript. All authors contributed to the article and approved the submitted version.
---
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
---
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
---
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fcvm.2024.1215458/full#supplementary-material | 57,283 | 1,833 |
ae89c7dd16a4cd5adb98611a4f4909f1cd94c89e | Socioeconomic inequalities across life and premature mortality from 1971 to 2016: findings from three British birth cohorts born in 1946, 1958 and 1970 | 2,020 | [
"JournalArticle"
] | Introduction Disadvantaged socioeconomic position (SEP) in early and adult life has been repeatedly associated with premature mortality. However, it is unclear whether these inequalities differ across time, nor if they are consistent across different SEP indicators. Methods British birth cohorts born in 1946, 1958 and 1970 were used, and multiple SEP indicators in early and adult life were examined. Deaths were identified via national statistics or notifications. Cox proportional hazard models were used to estimate associations between ridit scored SEP indicators and all-cause mortality risk-from 26 to 43 years (n=40 784), 26 to 58 years (n=35 431) and 26 to 70 years (n=5353). Results More disadvantaged SEP was associated with higher mortality risk-magnitudes of association were similar across cohort and each SEP indicator. For example, HRs (95% CI) from 26 to 43 years comparing lowest to highest paternal social class were 2.74 (1.02 to 7.32) in 1946c, 1.66 (1.03 to 2.69) in 1958c, and 1.94 (1.20 to 3.15) in 1970c. Paternal social class, adult social class and housing tenure were each independently associated with mortality risk. Conclusions Socioeconomic circumstances in early and adult life show persisting associations with premature mortality from 1971 to 2016, reaffirming the need to address socioeconomic factors across life to reduce inequalities in survival to older age. | INTRODUCTION
Previous evidence consistently shows disadvantaged socioeconomic position (SEP) in childhood and adult life is associated with increased premature mortality risk. 1 However, the magnitude of the inequalities is likely context-specific and may therefore change across time. Evidence on these changes in the UK, however, is inconsistent. Inequalities in all-cause mortality by area-level measures of deprivation in adulthood appear to have increased from the 1980s to 2010s in Britain. 2 This contrasts with reports of narrowing inequalities over the same period by educational attainment 3 -observed trends may therefore be sensitive to the specific SEP indicator used.
Existing studies investigating lifetime SEP and mortality associations have typically been limited to older cohorts (born in the 1930s-1950s), nonrepresentative samples, and are limited to single indicators of SEP-with childhood indicators recalled in adulthood. 1 The current study uses three comparable national British birth cohorts -born in 1946, 1958 and 1970-to investigate changes in inequalities in all-cause mortality risk across adulthood and early old age of three generations. The cohorts benefit from multiple prospectively ascertained SEP indicators. Previous evidence has examined the 1946 birth cohort in midlife 4 and found childhood SEP was associated with premature mortality risk. Given persisting inequalities in multiple diseases and other mortality risk factors across the studied period (1971-2016), 4 5 and the persisting inequalities in social and health outcomes in subsequent birth cohorts, 6 7 we hypothesised that inequalities, according to both childhood and adult SEP, in premature mortality would have persisted.
---
METHODS
---
Study design and sample
We used data from three British birth cohort studies, which have reached mid-adulthood: born in 1946 (MRC National Survey of Health and Development (1946c)), 8 9 1958 (National Child Development Study (1958c)) 10 and 1970 (British Cohort Study (1970c)). 10 These cohorts have been described in detail elsewhere. 6 7 Analyses in the 1946c were weighted as this study consists of a social class-stratified sample. Participants were included in the current analysis if they were alive at age 26 years, had a valid measure of parental and/or own SEP, and known vital status and date (from age 26 onwards).
Paternal occupational social class at birth was used in 1958c and 1970c, and at age 4 in 1946c (birth data were not used to avoid World War II-related misclassification); occupation was classified using the Registrar General's Social Class (RGSC) scale: I (professional), II (managerial and technical), IIIN (skilled non-manual), IIIM (skilled manual), IV (partly skilled) and V (unskilled) occupations. Maternal education was collected at birth (1958c-1970c).
---
Mortality
Death notifications were supplied from the Office for National Statistics and/or via participants' families during fieldwork. 11 12
---
Statistical analysis
To aid cross-cohort comparisons, analyses were carried out across the following age ranges: 26-43 years (all cohorts), 26-58 years (1946c-1958c) and 26-70 years (1946c).
For each SEP measure, cumulative death rates were calculated for each group. Cox proportional hazards models were used to estimate associations between each SEP indicator and all-cause mortality, following checks, based on Schoenfeld residuals, that the proportional hazards assumption held (online supplemental table S1). Follow-up was from age 26 to date of death, or was censored at date of emigration or at the end of each follow-up period for those still alive (age 43, 58 or 70). To provide single quantifications of inequalities, all SEP indicators were converted to ridit scores, resulting in an estimate of the Relative Index of Inequality. Cohort differences were formally tested using SEP×cohort interaction terms. Models were adjusted for sex, and were also conducted separately to examine if findings differed in each sex. To investigate if associations of SEP across life and premature mortality were independent of each other and thus cumulative in nature, (1) mutually adjusted models were conducted including paternal and own social class, and additionally housing tenure, given the suggested importance of wealth 4 ;
(2) a composite lifetime SEP score was used in models, obtained by combining these two or three indicators together and rescaling. 4 Multiple imputation was conducted to address missing data in SEP indicators (N=481 (1946c), N=514 (1958c), N=1236 (1970c)); complete case analyses yielded similar findings. Ten imputed data sets were used. Finally, to investigate if results were similar when examined on the absolute scale, models were repeated using logistic regression (dead/alive at the end of each follow-up period, with those who emigrated excluded) and absolute differences in predicted probabilities of mortality were calculated. All analyses were conducted in Stata, version 16.0 (StataCorp LP, College Station, TX, USA).
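All analyses in this study were run in Stata 16; purely as an illustrative analogue (not the authors' code), the Python sketch below ridit-scores an ordinal SEP indicator and fits a sex-adjusted Cox model with the lifelines package, so that the hazard ratio for the ridit-scored term can be read as a Relative Index of Inequality. The simulated data frame and all column names are hypothetical stand-ins for the cohort data.

```python
# Illustrative analogue: ridit-score an ordinal SEP indicator and fit a
# sex-adjusted Cox model (lifelines). Simulated data, not cohort data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "sep_class": rng.integers(1, 7, n),              # e.g., RGSC I (1) ... V (6)
    "female": rng.integers(0, 2, n),
    "futime": rng.exponential(30, n).clip(0.5, 44),  # years of follow-up from age 26
})
df["died"] = (rng.random(n) < 0.05 * df["sep_class"] / 6).astype(int)

def ridit(series):
    """Ridit score: cumulative proportion up to the category midpoint (0-1)."""
    freq = series.value_counts(normalize=True).sort_index()
    cum = freq.cumsum() - freq / 2
    return series.map(cum)

df["sep_ridit"] = ridit(df["sep_class"])

cph = CoxPHFitter()
cph.fit(df[["sep_ridit", "female", "futime", "died"]],
        duration_col="futime", event_col="died")
# HR for sep_ridit ~ Relative Index of Inequality (most vs least disadvantaged)
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```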
---
RESULTS
---
More disadvantaged SEP in both childhood (paternal social class) and early adulthood (education attainment, own social class and housing tenure) was associated with higher mortality risk, with 21 of 24 hazard ratios (HRs) between 1.6 and 3.1 (figure 1). As anticipated, associations were least precisely estimated at 43 years, where there were fewer deaths, and in the 1946c, which is smaller than 1970c and 1958c. Across each age period, HRs were generally larger in 1946c than in the two later-born cohorts, but the CIs for 1946c at younger ages were wide (figure 1 and online supplemental table S3; all cohort×SEP interaction term p values were >0.4). For example, HRs of early death from 26 to 43 years comparing most to least disadvantaged paternal social class were 2.74 (95% CI 1.02 to 7.32) in 1946c, 1.66 (95% CI 1.03 to 2.69) in 1958c and 1.94 (95% CI 1.20 to 3.15) in 1970c. Associations were weaker for maternal education as an alternative indicator for childhood SEP, particularly for 1946c (online supplemental table S4). Housing tenure in adulthood was also associated with mortality: renters, compared with homeowners, had a consistently higher risk of death; HRs from 26 to 43 years were 2.06 (95% CI 1.03 to 4.12) in 1946c, 1.30 (95% CI 0.87 to 1.94) in 1958c and 1.61 (95% CI 1.00 to 2.60) in 1970c.
In models including both paternal and own social class, associations were typically partly attenuated, but generally both variables were still associated with premature mortality. Additionally, there was some evidence that composite lifetime SEP scores had larger magnitudes of association with mortality than each indicator in isolation (particularly in later periods of follow-up; online supplemental table S5a).
Findings were similar when housing tenure was included in models (online supplemental table S5b).
Findings of persistent inequalities in premature mortality across each cohort were also found when examining on the absolute scale (online supplemental tables S6), and when conducted separately among men and women (online supplemental tables S7 and S8). There was suggestive evidence for stronger associations among females in the 1946c and among males in the 1970c.
Figure 1 Associations between socioeconomic position and adult mortality risk: evidence from three British birth cohort studies.
---
DISCUSSION
Despite declining mortality rates across the studied period (1971-2016), inequalities in premature mortality appear to have persisted and were consistently found for multiple SEP indicators in early and adult life. Our findings build on prior investigations which used 1946c but not younger cohorts 4 or repeated follow-up of adult cohorts 1 13 ; and on seminal reviews which focus on area-based SEP indicators. 2 14 The persistence of inequalities, even in a period of marked changes to cultural, social, economic and population-wide health (eg, declines in CVD mortality rates), is suggestive of multiple time-dependent pathways between SEP and mortality. 15 It is possible that, despite their overlap, each SEP indicator captures different pathways, resulting in their independent associations with mortality. For example, child SEP is associated with many mortality risk factors such as BMI independently of adult SEP, 6 and housing tenure may specifically capture wealth given the increasing value of housing in Britain; wealth is increasingly suggested to be an important health-relevant SEP indicator. 16 The main causes of death within these cohorts were likely to have been cancers, coronary heart disease and unnatural causes. 17
Strengths of the study include the use of three large nationally representative studies, enabling long-run investigation of trends in mortality risk, and the use of multiple SEP indicators across life. While we use multiple indicators of SEP, they are likely to be underestimates of socioeconomic inequality: wealth, for example, is only crudely approximated by home ownership, we lack comparable data on income, and we lacked power to investigate highest attained social class in midlife. Further, while RGSC is widely used in historic samples and official statistics (pre-2000), there is uncertainty in the criteria with which jobs were classified. While there were a small number of participants with missing outcome data, reassuringly the mortality rates in each cohort corresponded with those expected in the population at the time. 18 Our study was limited to all-cause mortality; however, trends in inequalities may differ by health outcome, for example, absolute inequalities in coronary heart disease appear to have narrowed in 1994-2008, 19 20 but inequalities in stroke remained unchanged. 20 Future studies with larger sample sizes are warranted to investigate trends in cause-specific premature mortality.
Our findings reaffirm the need to address socioeconomic factors in both early and adult life to reduce inequalities in early-to-mid adulthood mortality. In contemporaneous and future cohorts, inequalities in premature mortality are likely to be significant barriers to a necessary component of healthy ageing: survival into older age.
Twitter Meg Fluharty @MegEliz_.
Contributors MEF, RH and DB were involved in the conception and design of the study; MEF conducted the analyses and drafted the manuscript; and MEF, RH, GP, BP and DB revised the manuscript and approved it for submission.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.
---
What is already known on this subject
► Disadvantaged socioeconomic position in early and adult life is associated with increased premature mortality risk. Relative inequalities in all-cause mortality by area deprivation have increased from the 1980s to 2010s in England, Wales and Scotland. However, this contrasts with reports of narrowing relative mortality inequalities by educational attainment, and these differences in trends by SEP assessment suggest differences in social stratification according to different measures. Therefore, while there is a known association of SEP with mortality, there is little evidence on how different SEP indicators are associated with mortality risk, and how these associations have changed across time.
---
What this study adds
| 12,365 | 1,399 |
3552d98f72a4d1447af5e05b2569b94906efe6eb | The impact of social support on the health-related quality of life of adult patients with tuberculosis in Harare, Zimbabwe: a cross-sectional survey | 2018 | [
"JournalArticle",
"Review"
] | Objective: Tuberculosis (TB) is the second leading cause of mortality in Sub-Saharan Africa and remains a major worldwide public health problem. Unfortunately, patients with TB are at risk of poor mental health. However, patients who receive an adequate amount of social support are likely to have improved health outcomes. The study was done to establish how social support influences the health-related quality of life (HRQoL) of patients with TB in Harare, Zimbabwe. Data were collected from 332 TB patients and were analysed through structural equation modelling. The mean age of the participants was 40.1 (SD 12.5) years and most were male (53%), married (57.8%), educated (97.3%), unemployed (40.7%), living with family (74.4%), and reported below-average income (51.5%). Patients received the greatest amount of social support from family. Patients also presented with lower HRQoL, frequently reporting pain, anxiety and depression. The final model accounted for 68.8% of the variance. Despite methodological limitations, the study findings suggest that social support optimises patients' HRQoL. Given that patients presented with poorer mental health, there is a need to develop and implement patient wellness interventions. | Introduction
Tuberculosis (TB) is the second leading cause of mortality in Sub-Saharan Africa and remains a major worldwide public health problem despite the discovery of highly effective drugs and vaccines [1,2]. The HIV/AIDS pandemic further exacerbates the burden of TB. For example, 23.1% of patients diagnosed with HIV/AIDS in Sub-Saharan Africa are reportedly co-infected with TB [1,3]. Unfortunately, patients with TB are at risk of poor mental health and lower health-related quality of life (HRQoL) [4]. For instance, between 40 and 70% of patients with TB suffer from various common mental disorders such as depression and anxiety [4][5][6].
Regrettably, patients with poor mental health are unlikely to adhere to treatment regimens, and this decreases treatment efficacy [6,7]. Further, non-compliance leads to the development of drug-resistant TB, which is expensive to treat and has an increased mortality rate [8]. Therefore, poor mental health perpetuates a vicious cycle of adverse health outcomes [5,6]. However, there is established evidence showing that patients who receive an adequate amount of social support (SS) are likely to have optimal mental health outcomes such as lower psychiatric morbidity [9] and increased HRQoL [10]. Social support is defined as the amount of both perceived and actual care received from family, friends and/or the community [11]. Furthermore, SS is an essential buffer against adverse life events (e.g. diagnosis of TB), and higher SS leads to increased treatment adherence and improved treatment outcomes [12,13]. It can therefore be hypothesised that SS may improve the HRQoL of patients facing adverse life events such as TB. Unfortunately, there is a lack of evidence on the mental health of TB patients residing in low-resource settings such as Zimbabwe, yet the burden of the disease is quite high. The present study, therefore, sought to establish how SS influences the HRQoL of patients with TB in Harare, Zimbabwe.
---
Main text
---
Study design, research setting and participants
A descriptive, cross-sectional study was carried out on adult patients with TB in Harare, Zimbabwe. Participants were recruited by convenience sampling from one low-density suburb primary care clinic and two infectious disease hospitals. These three settings were selected as they have the highest catchment of patients with TB of varying socioeconomic status. Applying the following parameters: a TB prevalence rate of 28.2% (p = 0.282 and q = 0.718), a 95% confidence interval, and an expected 10% of incomplete records, the minimum sample size according to STATISTICA software was 347. We recruited patients with a confirmed diagnosis of TB according to doctors' notes, aged ≥ 18 years, fluent in either English or Shona (a Zimbabwean native language), and with no other chronic comorbid conditions such as HIV/AIDS.
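The paper does not state the margin of error used, but the reported figure of 347 is consistent with Cochran's single-proportion formula with a 5% margin of error, inflated for 10% incomplete records. The short sketch below is an illustrative reconstruction under that assumption, not the authors' actual STATISTICA calculation.

```python
import math

# Hypothetical reconstruction of the reported minimum sample size (347).
# The 5% margin of error is an assumption; the paper only reports the
# prevalence, the 95% confidence level and the 10% inflation.
z = 1.96                 # z-value for a 95% confidence interval
p, q = 0.282, 0.718      # assumed TB prevalence and its complement
e = 0.05                 # assumed margin of error

n0 = math.ceil((z**2 * p * q) / e**2)  # base sample size (Cochran): 312
n = math.ceil(n0 / 0.9)                # allow for 10% incomplete records
print(n0, n)                           # 312 347
```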
---
Study instruments
Social support and HRQoL were measured using the Multidimensional Scale of Perceived Social Support (MSPSS) and the EQ-5D, respectively. The MSPSS is a 12-item outcome measure assessing the amount of SS received from family, friends and a significant other [14]. The MSPSS-Shona version is rated on a five-point Likert scale with responses ranging from strongly disagree = 1 to strongly agree = 5; the higher the score, the greater the SS [15]. The EQ-5D is a generic HRQoL measure assessing participants' perceived HRQoL in the following five domains: mobility, self-care, usual activities, pain, and anxiety/depression [16]. The severity of impairment is rated on a three-point scale, i.e. no problem, some problem and extreme problem. The responses are log-transformed to give a utility score which ranges from zero to one, with a score of one representing perfect health status. Respondents also rate their health on a linear visual analogue scale with a score range of 0-100; the higher the score, the higher the HRQoL [16,17]. The MSPSS and EQ-5D were selected for the present study as they are standardised, generic outcomes with robust psychometrics, are very brief, and have been translated and validated in Shona [14][15][16][17].
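As a concrete illustration of MSPSS scoring, the sketch below computes subscale means using the standard MSPSS item-to-subscale groupings (significant other, family, friends) and the 1-5 response range described above. It is a minimal illustration under those assumptions, not the authors' scoring script.

```python
# Standard MSPSS item groupings (assumed; not stated explicitly in the paper).
MSPSS_SUBSCALES = {
    "significant_other": [1, 2, 5, 10],
    "family":            [3, 4, 8, 11],
    "friends":           [6, 7, 9, 12],
}

def score_mspss(responses):
    """responses: dict mapping item number (1-12) to a 1-5 rating.
    Returns the mean rating per subscale; higher means greater support."""
    return {name: sum(responses[i] for i in items) / len(items)
            for name, items in MSPSS_SUBSCALES.items()}

# Example respondent: agrees (4) with all items except two friend items (2).
example = {i: 4 for i in range(1, 13)}
example[6] = example[7] = 2
print(score_mspss(example))  # {'significant_other': 4.0, 'family': 4.0, 'friends': 3.0}
```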
---
Procedure
Institutional and ethical approval for the study was granted by the City of Harare Health Council and the Joint Research and Ethics Committee for the University of Zimbabwe, College of Health Sciences & Parirenyatwa Group of Hospitals (Ref: JREC/362/17). This study adhered to the Declaration of Helsinki ethical principles. Participants were approached as they were waiting for services at the respective research sites, and recruitment was done over 4 consecutive weeks. The principal researcher explained the study aims, and interested participants were requested to give written consent before participating. The questionnaires were self-administered to identified participants, and completed questionnaires were collected on the same day.
---
Data analysis and management
Data were entered into Microsoft Excel and analysed using SPSS (version 23), STATISTICA (version 14) and STATA (version 15). Normality was checked using the Shapiro-Wilk test, and participants' characteristics and the EQ-5D and MSPSS outcomes were summarised using descriptive statistics. Correlation coefficients, Chi-square/Fisher's exact tests, analysis of variance (ANOVA) and t-tests were used to determine factors influencing patients' social support and HRQoL. Subsequently, patients' characteristics (age, marital status, educational level, employment status, perceived financial status and place of residence) and the MSPSS and EQ-5D were entered in the structural equation model (SEM) as endogenous and exogenous variables, respectively. The following criteria were set as minimum thresholds for model fit: Likelihood Ratio Chi-squared Test (χ²), p > 0.05; Root Mean Square Error of Approximation (RMSEA), ≤ 0.06; Comparative Fit Index (CFI), ≥ 0.90; Tucker-Lewis Index (TLI), ≥ 0.90; and Standardized Root Mean Square Residual (SRMR), ≤ 0.06 [18,19].
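The fit criteria above can be screened programmatically. The following sketch is not the authors' code, and the example values are placeholders rather than the study's results; it simply checks a set of fit statistics against the stated thresholds.

```python
# Thresholds as listed in the text above.
THRESHOLDS = {
    "chi2_p": lambda v: v > 0.05,   # likelihood-ratio test p-value
    "rmsea":  lambda v: v <= 0.06,
    "cfi":    lambda v: v >= 0.90,
    "tli":    lambda v: v >= 0.90,
    "srmr":   lambda v: v <= 0.06,
}

def check_fit(indices):
    """Return True/False per fit index, for whichever indices are supplied."""
    return {name: rule(indices[name])
            for name, rule in THRESHOLDS.items() if name in indices}

# Placeholder values: adequate fit on all indices except the chi-squared test.
print(check_fit({"chi2_p": 0.01, "rmsea": 0.05, "cfi": 0.93, "tli": 0.91, "srmr": 0.04}))
```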
---
Results
The mean age of the participants was 40.1 (SD 12.5) years. Most patients were male (53%), married (57.8%), educated (97.3%), unemployed (40.7%), lived in high-density suburbs (46.4%), lived in rented accommodation (44.9%), lived with family (74.4%), and reported below-average income (51.5%). Further, as shown in Table 1, patients received the least social support from friends (mean 2.8, SD 1.2) and the most from family (mean 3.7, SD 1.0); frequencies of MSPSS responses are shown in Additional file 1. Patients frequently reported pain, anxiety and depression (see Additional file 2 for frequencies of EQ-5D responses), and the mean HRQoL (EQ-5D VAS) score was 51 (SD 18.1).
The final model (Fig. 1) revealed that patients who received an adequate amount of SS had greater HRQoL, r = 0.33, p < 0.001. Further, increased age, being unmarried, lower educational attainment, lower SES and residing in urban areas were associated with poorer mental health. The model displayed adequate fit: with the exception of the likelihood ratio test, the goodness-of-fit indices were within the acceptable thresholds (see Table 2), and the model accounted for 68.8% of the variance (see Additional file 3).
---
Discussion
The main finding of the present study was that patients who received an adequate amount of social support had greater HRQoL, which is congruent with previous studies [4,9,20]. However, patients reported lower HRQoL (mean EQ-5D VAS 51, SD 18.1) when compared to healthy urban dwellers residing in the same research setting, who previously reported a mean score of 77.5 (SD 17.4) [17]. The HRQoL outcomes were, however, similar to those of Zimbabwean patients with HIV/AIDS [21], which demonstrates the impact of long-term conditions on patients' HRQoL. Pathological processes and changes, e.g. persistent coughing, peripheral neuropathy, haemoptysis, fatigue and chest pain, and medication side effects such as excessive tingling sensations, have been reported to contribute substantially towards lower HRQoL [4,22]. Additionally, external/environmental factors such as cultural beliefs/myths and stigma are also likely to contribute towards depression, lower self-efficacy, and lower emotional wellbeing, which ultimately result in lower HRQoL [1,7,23,24]. Evidence from a systematic review evaluating the HRQoL of South African patients with TB suggests that psycho-social burdens, e.g. isolation and stigma, dramatically impact patients' HRQoL when compared to the effects of clinical symptoms [25]. This is unfortunate given that stigma precludes patients from receiving an adequate amount of SS [7,23,25]. Several studies concur that patients with greater SS are likely to promptly initiate diagnosis and treatment [24], comply with treatment regimens [12,13], and have lower psychiatric morbidity [9], which will, in turn, lead to increased HRQoL [4]. Discrepancies in the amount of SS received from family and friends are suggestive of societal stigma and/or cultural influences. For instance, in the African context, it is often the responsibility of the immediate family and spouses to care for a sick relative [1]. This could explain differences in SS sources, as most participants were married. Further, the present study also demonstrated the impact of contextual factors on patients' mental health, as reported elsewhere [5,20,26]. For example, patients who were educated, formally employed and had higher levels of income had higher levels of SS and HRQoL. Patients with more financial resources are likely to afford specialist support services and to use medications with fewer side effects, and are thus likely to have higher HRQoL [27]. This sharply contrasts with more impoverished patients, who are likely to develop anxiety and/or depression because of financial pressure [24,28]. Malnutrition and non-compliance with treatment regimens (e.g. poor medication intake, failure to attend scheduled follow-up appointments, and lack of funds for purchasing drugs and investigative tests) have been previously reported in patients residing in low-resource settings [1,7,24,27,28].
---
Conclusion
The current study suggests that TB patients who receive a higher amount of social support are likely to have higher HRQoL in the Zimbabwean context. Also, given that patients reported poorer mental health, there is a need to develop and implement patient wellness interventions. Further studies should utilise longitudinal and qualitative study designs and recruit patients residing in rural areas to fully understand the mental health of Zimbabwean patients with TB. Efforts should also be made to formally validate mental health outcome measures in this population.
---
Limitations
Although this is the first large-scale study to evaluate the impact of SS on the HRQoL of tuberculosis patients in Zimbabwe, the study outcomes need to be interpreted with caution given the following limitations:
• Participants and the research settings were conveniently selected. However, the settings represent the largest catchment areas of patients with TB in Harare.
• The duration of TB diagnosis and treatment was not extracted, and this may have influenced the reported mental health.
• Participants were only recruited from an urban setting, thus outcomes may not be generalisable to all Zimbabwean patients given that more than 67% of Zimbabweans reside in rural areas [29].
• We only recruited participants who were proficient in English and/or Shona; Zimbabwe is a multilinguistic country, but the study instruments were only adapted, translated and validated in the Shona language.
• The psychometric properties of the study instruments were not formally tested in patients with TB.
• Although we applied SEM, causality could not be inferred given the cross-sectional nature of the data.
• Confounding variables, such as the length of treatment and type of TB, were not documented, and this may partly account for the 31.2% of the variance which was not explained by the final model.
---
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
---
Additional files
Additional file 1. Frequencies of responses on the MSPSS, N = 332. Table denotes frequencies of responses on the MSPSS, a 12-item social support outcome measure. Responses are rated on a five-point Likert scale, ranging from strongly disagree = 1 to strongly agree = 5.
---
Additional file 2.
Frequencies of responses on the EQ-5D, N = 332. The table denotes frequencies of responses on the EQ-5D, a generic health-related quality of life measure. Respondents indicate whether they had problems with self-care, usual activities, mobility, pain/discomfort and anxiety/depression on a three-point scale. Responses are rated as "no problem", "some problem" and "extreme problem".
---
Additional file 3.
Variance explained by the model. The table denotes the variance accounted for by the variables and by the total model expressing the relationship between contextual factors, levels of social support and health-related quality of life.
Abbreviations ANOVA: analysis of variance; CFI: Comparative Fit Index; EQ-5D: EuroQol five-dimension scale; HIV/AIDS: human immunodeficiency virus/acquired immunodeficiency syndrome; HRQoL: health-related quality of life; MSPSS: Multidimensional Scale of Perceived Social Support; RMSEA: Root Mean Square Error of Approximation; SD: standard deviation; SEM: structural equation model; SRMR: Standardized Root Mean Square Residual; SS: social support; TB: tuberculosis; TLI: Tucker-Lewis Index.
---
Authors' contributions
CZ, MC, CT and JMD developed the concept and design of the study. CZ collected the data and drafted the first version of the manuscript with the assistance of DM. JMD conducted the data analysis and statistical interpretation, extensively revised the first version of the manuscript, prepared all prerequisite processes for article submission, submitted the manuscript, and is the corresponding author. MC, CT and DM revised and contributed to the drafting/revision of the third and fourth versions of the manuscript in preparation for submission to the journal. All authors read and approved the final manuscript.
---
Author details
1 Department of Rehabilitation, College of Health Sciences, University of Zimbabwe, P.O Box A178, Avondale, Harare, Zimbabwe. 2 Department of Psychiatry, College of Health Sciences, University of Zimbabwe, P.O Box A178, Avondale, Harare, Zimbabwe. 3 School of Health and Rehabilitation Sciences, Faculty of Health Sciences, University of Cape Town Observatory, Cape Town 7700, South Africa. 4 Department of Psychology, University of Cape Town, Rondebosch, Cape Town 7701, South Africa. 5 Department of Physiotherapy, School of Therapeutic Sciences, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa.
---
Acknowledgements
The manuscript is a product of the manuscript writing and systematic review workshops facilitated by Dr. Helen Jack (Harvard University/Kings College London). Further, the manuscript is also a practical application of the Academic Career Enhancement Series (ACES) program led by Dr. Christopher Merritt (Kings College London). The senior author utilized the skills acquired through the ACES program in both thesis supervision and mentoring of the first author in producing the first draft of the manuscript. Statistical skills learnt from the data analysis workshops by Dr. Lorna Gibson and Professor Helen Weiss (London School of Hygiene and Tropical Medicine) were also fundamental in enhancing the senior author's statistical analysis and interpretation skills.
---
Competing interests
The authors declare that they have no competing interests.
---
Consent for publication
Not applicable as the manuscript does not contain any data from any individual person.
---
Ethics approval and consent to participate
Ethical approval for the study was granted by the City of Harare Health Department and the Joint Research and Ethics Committee for the University of Zimbabwe, College of Health Sciences & Parirenyatwa Group of Hospitals (Ref: JREC/362/17). Participants were treated as autonomous agents and were requested to sign written consent before participation. Pseudo-names were used to preserve confidentiality, data were stored securely, and only the researchers had access to the information gathered, and participants could voluntarily withdraw from the study at any time without any consequences.
---
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 16,941 | 1,322 |
2790db1078d10d942fce20ed57037227d31d0e28 | Maternal Risk Anxiety in Belfast: Claims, Evaluations, Responses | 2017 | [
"JournalArticle"
] | This paper considers the social logic of maternal anxiety about risks posed to children in segregated, post-conflict neighbourhoods. Focusing on qualitative research with mothers in Belfast's impoverished and divided inner city, the paper draws on the interactionist perspective in the sociology of emotions to explore the ways in which maternal anxiety drives claims for recognition of good mothering, through orientations to these neighbourhoods. Drawing on Hirschman's model of exit, loyalty and voice types of situated action, the paper examines the relationship between maternal risk anxiety and evaluations of neighbourhood safety. In arguing that emotions are important aspects of claims for social recognition, the paper demonstrates that anxiety provokes efforts to claim status, in this context through the explicit affirmation of non-sectarian mothering. |
How do inner city mothers in Belfast, particularly those raising young children in the first decade of the post-conflict era, negotiate a shifting normative landscape? How do they seek affirmation concerning the quality of their mothering in a changed context? These questions are explored in what follows through a focus on the social logic of maternal anxiety, the evaluative responses it generates (Burkitt 2012;Kemper 1978: 41), and its significance as a guide to the actions of mothers to the neighbourhoods where they live in Belfast's inner city, which continues to be divided and strongly marked by sectarian hostilities, as well as sporadic violence (Shirlow 2008).
This paper aims to contribute to our understanding of the significance of emotions for status claims. In so doing, social emotions, such as anxiety, are treated not as psychic pathologies, or as reflections of the strain of structural imperatives to conform (Barbalet 2001;Hochschild 1979). Instead, they are understood to be important aspects of claims for social recognition, that is, claims for verification of the actor's authoritative status (Crossley 2011;Honneth 1995;McBride 2013), what Weber described as a 'social estimation of honor ' (1948: 186-7).
Emotions are consequently understood as a central feature of agency, providing feedback to the self as a guide to further action (Burke and Stets 2009). Anxiety, for instance, is treated as a sign of insufficient power and status (Kemper 1978: 49), signaling actor unease both over the authority of specific actions (Denzin 2007;Kemper 1978;Lynd 1958), and more generally over the validity of the claim to be recognized as an authoritative actor.
The classed and gendered character of maternal anxiety is examined in what follows through a focus on the relationship between the social dynamics of this emotion in the inner city, and what Hirschman (1970) identified as 'exit, loyalty and voice' types of attitudes, in this case towards segregated, multiply deprived residential neighbourhoods. What follows firstly considers the gendered character of contemporary parenting, and particularly the relationship between motherhood and anxiety, before examining the character of maternal recognition claims, notably through affirming boundaries of respectability and stigma, and through orientations to neighbourhoods.
---
The Social Politics of Motherhood: Norms, Conflict, Anxiety
Parenting remains a strongly gendered practice, with distinct social expectations attached to motherhood and fatherhood (Craig et al. 2014;Rose et al. 2015;Thomas and Hildingsson 2009). Doucet (2015) argues, drawing on feminist debates about the ethics of care, that parental responsibility, understood as a sense of obligation not only to provide practical care, but also to assume a generally attentive and responsive attitude towards those being cared for, remains largely gendered. This is despite changes in how caregiving tasks and time are shared between mothers and fathers (e.g. Kaufman 2013). As she argues, gendered parenting is not simply a matter of equally sharing household and care tasks, but more broadly reflects a 'state of mind', or orientation towards the role of parent, an effect of gender norms.
Mothers continue to be positioned as the primary parent, both in law and in social life, with the consequence that motherhood tends to be associated with distinct emotional dynamics (Hildingsson and Thomas 2013;Warner 2006). Anxiety in particular, an anticipatory emotion reflecting confidence in one's competence as an agent, is a significant aspect of motherhood, an effect of the gendered quality of social power and status (Kemper 1978: 66-7). Indeed, maternal anxiety is the focus of much sociological and psychological research (e.g. Glasheen et al. 2010;Hays 1996;Longhurst 2008;Warner 2006). By contrast, the lack of attention to a phenomenon of 'paternal anxiety' suggests that fatherhood may involve less anticipatory selffeelings and more 'consequent' emotions, those retrospective evaluations of one's specific actions, rather than of one's self (Kemper 1978: 49). This may explain why men are able to opt out of essential care-giving without compromising their sense of themselves as good, involved fathers, (Craig 2006;Rose et al. 2015;Thomas and Hildingsson 2009).
The effort to parent well against this background of gendered role expectations draws mothers in particular into what Scott et al (1998) describe as ever-increasing 'risk anxiety' about their children. This involves endlessly monitoring and responding to perceived threats to safety, especially those posed to one's children from others and the wider environment, including those posed by one's children to other people, including other children (Scott et al. 1998: 689).
Responsibility for assessing and preventing harms to children is increasingly borne by parents, rather than by experts and state agencies, as trust in these institutions has faded (Reich 2014; Warner 2006). Indeed, the effort to protect children from potential harm often focuses on sexual risk, as indicated by the emergence of 'stranger danger' education campaigns, as well as the politicisation of child sex abuse (Bell 2002; Lorentzen 2013). As Scott and colleagues argue, while risk of sexual harm to children is actually posed primarily by familiar people rather than strangers, parental anxiety about 'stranger danger', and more recently paedophilia, although disproportionate to the risk, nevertheless does have a social logic (1998: 693). The moral norms associated with parenting tend to generate morally oriented actions as the role is activated in specific situations (Stets and Carter 2012: 124). The presence of unsupervised children in public places does tend to generate moral concerns about the quality of parenting (Wyness 1994). When parental duties are highlighted in this way, it guides the turn towards increased anxiety about child safety and surveillance of children's activities, however apparently disproportionate or irrational. The significance of gender as a source of unequal social status (Ridgeway and Bourg 2004) means that mothers are particularly susceptible to anxiety, as they feel the authority of their actions as competent parents to be continuously in question. Maternal anxieties and fears are neither primal nor entirely personal, but instead reflect the strains and recognition conflicts of the context where they take shape (Robin 2004: 11).
Our interest in feeling at ease and claiming status as competent social actors involves the habitual effort to interpret signs of risk and danger successfully (Bourdieu 1977: 4;Goffman 1971: 249). Goffman argues that feeling at ease depends on being able to read and respond appropriately to relevant social cues, a skill which is only mastered through long-term familiarity with the context (1971: 249). That this sort of competence is more practical than cognitive explains why people tend to feel most at ease not in the most objectively safe places, but in those places where they have greatest experience, where they are able to cope best with the world around them, namely their homes, neighbourhoods, schools and places of work (Warr 1990: 893-5).
This takes on a particular intensity in contexts such as Belfast, marked by a history of intergroup hostility and violence (Shirlow and Murtagh 2006). Despite the peace agreement reached in 1998, and the subsequent establishment of a relatively non-violent society (Mac Ginty and du Toit 2007), segregation remains, and conflicting neighbourhoods are physically separated by 'peace walls' in some instances (Leonard and McKnight 2011). A sense of personal safety in those areas of the city most marked by sectarian violence depends on being able to 'tell' or read a person's ethno-nationality from indicators such as where they are located in public space, as the fear of breaching spatial boundaries tends to support ongoing segregation (Burton 1978). The detailed quality of this spatial segregation, which can change from one end of a street to another, is important in sustaining wider social divisions (Peach 2000). While often not immediately obvious to the casual observer, it nevertheless acts as a crucial, if imperfect, clue about the identity of those in specific places, especially when more overt signals, such as colour-coded clothing referring to the flags of one or other nationality, Irish or British, are missing.
Feeling that one is doing a good job as a mother is not easy in such a context, and is distinct from mothering during times of conflict, where women are often raising children as lone parents, possibly with extended female family members to assist, following the deaths or executions of men. The struggle to simply survive, often in the face of fear, poverty, trauma, poor health and dispossession, tends to define mothering during periods of political conflict (McElroy et al. 2010;Robertson and Duckett 2007). Motherhood also tends to become explicitly politicized during conflicts, symbolizing collective struggle, sacrifice and hopefulness, whether in radical, nationalist or religious terms (e.g. Aretxaga 1997;Peteet 1997;Zaatari 2006).
Mothering in post-conflict situations is distinct from this, as the emphasis turns towards securing long-term stability (e.g. Taylor et al. 2011). Nevertheless, the experience of violence does continue to influence post-conflict mothering (Merrilees et al. 2011). Women currently raising children in Belfast's inner city are no longer doing so in a situation where their husbands, fathers, brothers, boyfriends have been injured, killed or imprisoned. The intensity of wartime collective emotions, especially fear, anger and hatred, have abated to some extent, and the risks associated with sectarian activity, including long-term imprisonment, severe injury or death, have reduced. Thus, fear of possible direct involvement in violence has been replaced by a less focused set of anxieties (Barbalet 2001: 156), such as that one's children may be exposed to or become involved in lower-level sectarian encounters or general anti-social behaviour (Taylor et al. 2011). Anxiety about the potential stigma of being perceived oneself as a bad mother, for instance by raising sectarian or 'anti-social' children, is also not insignificant.
The effort to feel that one is mothering well in post-conflict contexts tends to involve responding with caution to the possibility of outbreaks of inter-group hostility in everyday life, alongside the more 'ordinary' anxieties about threats from reckless motorists and sexual predators. Furthermore, when parents find themselves worrying not about risks from strangers or long-standing adversaries, but about the threat posed by 'anti-social' children and young people in their own neighbourhood, the job of parenting seems yet more difficult and attitudes to the neighbourhood, whether those of loyalty and 'voice', or detachment and exit, are activated.
---
Responding to Risk Anxiety: Status and Stigma, Exit and Voice
What follows explores the ways in which inner city mothers in Belfast experience and respond to typical anxiety about their ability to protect their young children from risk (Wyness 1994).
These anxieties are somewhat intensified in Belfast by anxiety about sectarianism, particularly for those living in what are referred to as 'interface' areas, that is, those 'locations where Catholics and Protestants live side by side in mutually exclusive social worlds […] in such a way that difference is sustained' (Leonard 2006: 227).
The research focuses on residents of these segregated areas, which are characterised by multiple and high levels of deprivation (Northern Ireland Statistics and Research Agency 2010). The emphasis is on the quality of maternal anxiety and the evaluative responses to the social dynamics of these neighbourhoods that it generates (Kemper 1978: 47).
---
The Study
What follows draws on qualitative interviews with 39 Catholic and Protestant mothers of preschool aged children during 2009-10, living in segregated areas of inner north and east Belfast.
The aim was to examine the everyday urban lives of mothers raising very young children in those areas of the city which had been central to decades of political conflict. The focus on perceptions of urban transformation meant that interviews did not gather detailed information about personal lives or family arrangements, instead concentrated on perceptions of change in the experience of living in and moving about the inner city.
Participants were recruited through voluntary and community organisations, including statesponsored early years support centres, parent and toddler groups, and primary schools.
Respondents were on average aged 26, with two children, at least one of whom was of preschool age (under four). Nine respondents combined mothering with paid work, six in part-time and three in full-time employment, typically in community, care, retail or catering jobs.
Research material was gathered by a fieldworker who had both social proximity to and distance from the research context. While she had grown up in another troubled part of the city, her status as a middle-class university researcher generated a social distance which seemed to take priority over that of ethno-nationality. Respondents spoke to her principally as a fellow mother, albeit from a different generation and social class, and took little interest in her ethnonationality.
Multiple methods were adopted to maximise participation. These included non-participant observations, participant-directed photography, and semi-structured interviews with individuals (24), friendship pairs (five) and one group of four friends. Pair and group interviews were carried out with those who indicated a preference for this method, for both ethical and practical reasons. Interviews mostly took place in community settings, often in spaces provided by gatekeepers. The material from interviews is the focus of analysis in what follows.
---
Managing Anxiety: Status and Stigma
I'm always scared of someone coming round and kidnapping them or something. (Laura)
Anxiety about their own and their children's safety was commonly expressed by mothers in interviews. Concerns, typical of many urban contexts, focused on risks from traffic or sexual predators. The specific character of the place added a concern about keeping children away from sectarian rioting and police attention, and, above all, keeping them away from involvement with the 'anti-social' activities of young people in the neighbourhood.
The post-conflict context has changed the quality of these specific risk anxieties. Molly, for instance, a Protestant, worried that her son would be subjected to paramilitary violence from one of the various loyalist organizations, who tend to operate like urban gangs, competing within and between themselves for power in specific areas (Hamill 2011: 140-141):
It's not Protestants fighting with Catholics so much anymore, but if you mess with one person in an organisation that's it, you've got them all after you, you know. That would worry me about [my son] growing up …
Nevertheless, mothers struggled to accurately evaluate potential threats to their children's safety in a changed, post-conflict context, and so to feel that they were good, responsible parents.
The worry that children could be abducted by strangers echoes parental anxiety in other contexts, generated by repeated moral panics (Bell 2002;Pain 2008). Carol's anxiety prompted her to exercise more physical surveillance of their activities than she feels is common in her neighbourhood:
I was born in this area [Catholic, Inner North] and I know a lot of people in it and I do feel safe in it and not as safe as I used to do and I don't feel safe for my children's point of view you know. […] [N]ow they're allowed to play in the street in the summer, but I'm at the door like a stalker. Now I know they are young obviously anyway, but the rest of the parents in the street aren't like that, and in the other streets the kids are out running up and down […] and I am like parked on the doorstep with a mug of tea watching them on their wee bikes, cos I don't feel safe for them with the traffic and I would always be afraid of somebody trying to put them into a car, and drugs is a big issue round here at the moment, which wasn't when I was growing up.
Carol feels herself to be acting differently from her neighbours, who she perceives as practising a 'free range', or what Annette Lareau describes as a 'natural growth', approach to parenting. This affords children a lot of spatial freedom and control over their time, in contrast with the 'concerted cultivation' approach adopted by middle class parents, involving the detailed management of children's time and activities (Lareau 2003: 5). The more intensive approach that Carol adopts is important for her claim to be a good mother, protecting her children from harm as she allows them to play on the street. As her comments suggest, anxiety is a primary driver of her actions as a mother. Dawn's effort to manage her anxiety, so that she can allow her children a degree of independence, is not easy, and depends on setting up supervision networks, so that she can allow them more access to the world beyond the front door, a common parental strategy (Pain et al. 2005). It also depends on making a recognition claim about the quality of her parenting, in comparison with mothers who allow their children to be 'street reared': … I know people will say 'Oh mine go to the park [by themselves]'. Well if my kids need to go to the park, I'll take them myself. 'Oh I just let them go on down', and I say 'Aye, you're just too bloody lazy to take them yourself, that's why'. Now don't get me wrong, that's just my perspective. My kids aren't street reared, by no means. [My italics]
The distinction Dawn draws between her own careful supervision and those mothers whose children are 'street reared' involves making a stronger claim than Carol for recognition of the 'respectable' quality of her role performance. As McLaughlin has argued, a concern with respectability 'appears to be particularly significant in situations where prestige through occupational attainment is difficult to achieve' (1993: 563). June similarly drew a distinction between her own area and a neighbouring housing estate, commenting that '[i]t's like kitchen reared here, […] I just think there is a wee bit more decency [in comparison to the housing estate].' June is arguing here that children reared in their own kitchens in her area learn how to act 'decently', in contrast to the 'street reared' children of the housing estate, where attacks on strangers would not be unheard-of: '… you have to be rough over there, there are stereotypes […] to live up to.' In this way, June and Dawn manage their risk anxiety by claiming recognition for the quality of their mothering, in June's case through direct comparison with the housing estate. This contrast between 'street' and 'kitchen' child-rearing reflects similar distinctions found elsewhere. Mitchell and Green's working class respondents in the North East of England distinguished those children who play 'out the front' of their houses, on or in close proximity to the street, and, more respectably, 'out the back', in a more secured and supervised context (2002: 16). The prevalence of these sorts of status claims is not incidental to the wider politics of parenting. As Skeggs argues, '[r]espectability embodies moral authority: those who are respectable have it, those who are not do not' (1997: 3). The claim to respectability, moral authority and consequently status here, articulated by Dawn in terms of a more intensive style of mothering than her neighbours seem to employ, and by June as a more 'decent', domestically-focused style characteristic of her neighbourhood rather than specifically of herself, is an important anxiety management strategy which is caught up with broader responses to the 'hidden injuries' of class stigma (Sennett and Cobb 1972); the politicization of unsupervised children in public places; and the privatization of risk (Beck 1992). It isn't surprising then that Dawn's recognition claim depends on affirming this contemporary version of the moral character of parenthood.
Such condemnations of the 'irresponsible' parent, whose 'street reared' children engage in anti-social behaviour, are caught up in what Goffman describes as the two-role process through which the 'stigmatized' and the 'normal' circulate in unrealized ways, prompting actors to either try and align themselves with one or other role, or detach from the situation (1963b: 163-4). That these are interaction roles, rather than simple characteristics of persons, means that participants in a situation can be perceived as performing one or other, regardless of how they might be perceived in other contexts. The potential stigma of being evaluated as a bad, irresponsible mother, as a result of the public behaviour of one's children, is an extremely painful experience, to be avoided if at all possible (Lynd 1958: 64).
Feeling that one is regarded as a good mother appears to require responses to risk anxiety, for instance through surveillance and management of children's social interactions. Laura and Jessica, living in a Protestant area, regretted to some extent the loss of paramilitary control over public order: Years ago you wouldn't have got that much anti-social behaviour so you wouldn't have, but now it's just wild, cos they're getting away with it basically. […] I think it's because all the paramilitaries have died down. Round here would be mainly UDA [Ulster Defence Association] and it's really died down now, where they can't, you know, maybe go and beat people, […] and I think that's why the kids are running about going mad to be honest with you. (Laura) Karen, a Catholic, also worries about parents letting children 'go mad', as she puts it, in this changed context. For her, however, the rise in anti-social activity is caused not by a reduction in paramilitary control, but instead by a loosening of parental supervision in response to the ending of political violence: In Karen's view, mothers have abandoned their duty to protect children, in response to the emergence of peace, resulting in 'mad' anti-social behaviour. For some, anti-social activity has become the primary focus of maternal anxiety, as well as resentment: …you see kids there, teenagers, and they're standing at the corner with joints and that in their hands, cannabis and stuff, and you're [thinking] like 'where's their mummies?' and stuff. Cos like, if my son had done it, like you'd murder [punish] him probably, though it probably does no better. But I would definitely not want him to go down that road, never. (Jade)
The ambiguity Jade expresses about how to actually prevent young people from getting involved in drug taking doesn't detract from her strong conviction that good mothers should do their utmost to keep their children away from such activity, and that the mothers of these young people are failing in their duty to their children, as well as to their community. Parents of 'anti-social' children and young people are very much the focus of resentment and stigma, as they carry the blame for adding to the burden of respectable, responsible parenting in these neighbourhoods.
The condemnation of bad parents is captured in a conversation amongst a small group of mothers whose children all attend the same school in a Protestant area: Kathy … it's different now, they're being cheeky to their own. They're terrorising their own people in their own areas.
---
Vicky
Well my [son] was in bed at the weekend […] and there was two kids from that area out at 3 in the morning, and I think one was thirteen and one was fourteen or fifteen, calling my one to see if he could get out.
---
And one of them couldn't get into his own house… […]
Sharon But where were their parents?
A norm of parental responsibility is strongly affirmed here. While Sharon and Vicky have teenage as well as pre-school-aged children, and are sharply aware of the difficulties of managing their behaviour, the group nevertheless agrees that good parents know where their children are, make every effort to keep them safe, and prevent them from becoming a threat to others. Failure to do this is regarded as injuring the wider community, including other parents, who then must intensify their efforts to prevent their own children from getting involved.
The mothers here agree that parents are primarily responsible, over all other authorities, for the actions of their offspring, a poignant conclusion for Vicky in particular, whose teenage son had died as a result of a drug overdose. Nevertheless, the conversation confirms the normative expectation that the 'good' parent, implicitly the mother, bears responsibility for children's actions and characters. As Goffman reflects, 'stigma processes seem to […] enlist[…] support for society among those who aren't supported by it ' (1963a: 164).
The dynamics of maternal anxiety in the post-conflict era has shifted to some extent, as political and sectarian violence has declined. Fear that children could get caught up in violence appears to have been transformed into more a generalized anxiety about potential exposure to a variety of safety risks. This anxiety can be understood as the 'emotional tone' (Elster 1989: 128) of competing social norms, firstly that children should have more physical freedom than would have been possible during the Troubles, and at the same time, that good parents should protect their children from sexual predators, reckless motorists, and 'anti-social' young people.
Maternal efforts to position themselves as 'normal' by reproducing stigmatising processes, can be understood as an important strategy for responding to anxiety (Goffman 1963b: 163-4), and claiming status recognition.
---
Responding to Anxiety: Exit, Loyalty and Voice
The perception of risks outlined above, particularly of children becoming involved in sectarian and/or anti-social behaviour, tends to result in what Hirschman (1970) famously described as either an effort to 'exit', and find somewhere more 'respectable' to live, or a 'voice' response, whereby those who either have few exit options, and/or who feel loyal to the community, try to manage their risk anxiety by engaging in activities intended to improve the quality of social life in the area.
---
Exit
The decision to exit inner-city neighbourhoods was not easy, involving a decisive move away from close-knit, face-to-face communities, where reciprocal social support and solidarity is a vital resource in the face of political conflict and multiple forms of deprivation (Shirlow and Murtagh 2006: 20-21).
The effort to do a good job as a mother, against a background of both community loyalty and risk anxiety, plays a crucial role in motivating these mothers' efforts to change the social dynamics of their neighbourhoods. As Kathy commented, 'I think if you're a parent too […] you do want to […] make this place a better place […]. And not everybody wants to help, but there's that handful that say 'Well, we'll change the community', you know?'
---
Conclusion
The analysis presented here contributes to debates in sociology concerning the social significance of emotions, arguing that they do not simply indicate either a form of social conformity, or the strain of such conformity (e.g. Hochschild 2003), but that they are important aspects of interactive struggles for status recognition. Anxiety, understood as an indication of insufficient status and power, can provoke efforts to claim recognition for the former, in this context through the explicit affirmation of non-sectarian mothering. In other contexts which are free from sectarianism and a history of violent political conflict, claims for social status are likely to take a different form. The relative absence of alternative sources of power and status for those inhabiting Belfast's inner city makes non-sectarian mothering an important focus for making these recognition claims.
Consequently, while Belfast mothers in the inner city, like their counterparts elsewhere, worry about risks posed to children from motor traffic passing through residential streets, or from sexual predators, the concern about protecting children and young people from sectarian or anti-social activity is intensely felt. The quality of parenting has become the focus of much attention, as the classed boundaries of respectability are reinforced and resentment builds against those who seem to let their children 'run mad'.
For mothers raising young children in these circumstances, preferences about whether to remain or try to leave are not simply calculated in relation to objective measures of safety and danger. Instead, a sense of community loyalty, combined with the extent to which they feel at ease in these areas and able to claim recognition as good mothers, shapes attitudes to neighbourhoods. As Boal has argued, the combination of exit and voice responses to the social dynamics of residential segregation does reinforce social homogeneity (1976: 71). Although the 'voice' responses of women in this study tend to focus on non-sectarian mothering, for example by reducing the bitterness and hardness of inter-group attitudes, this may contribute towards softening the boundaries between communities. At the same time, a hardening in normative expectations concerning the moral duties of mothers is evident, not least through condemnations of those whose children grow up on the streets, rather than in their mothers' kitchens.
---
sectarianism, for instance by sending her youngest son to a mixed play-scheme, she is doubtful that this will be enough to keep him, or his older sibling, safe from involvement. She views the cross-community contact that the children's play-scheme provides as valuable. However, she seems unsure that 'contact' schemes such as this, which aim to resolve inter-group tensions by bringing people from each side together in an organised way, offer a long-term solution, despite common claims that they do (see, e.g., Amir 1969;Hewstone and Brown 1986).
In her effort to feel that she is mothering well, anxiety about sectarianism has taken priority.
An important factor in Kylie's decision to exit is the continued presence of her ex-partner and his family in the area, which has reduced her sense of status and consequently her loyalty to the neighbourhood, making a move away more feasible by lowering the emotional costs of exit (Hirschman 1970: 78).
Kylie's plan to exit the inner city was not common amongst the mothers in this study. The strong sense of belonging to an urban village, characterised by close and frequent interaction and support, particularly with extended family members, constituted a strong incentive to remain, despite commonly expressed risk anxiety. As Hirschman put it, 'loyalty holds exit at bay and activates voice ' (1970: 78). Consequently, mothers commonly sought to exercise 'voice' to influence the social character of their neighbourhoods, an important way of claiming recognition for their responsible mothering. This may explain why Kathy claimed that 'the women are the voice of the community really'.
---
Loyalty
June had a similar attitude and sought to improve the quality of life in the inner city in quite direct ways. She had moved away from Northern Ireland as a young woman: I remember thinking [that] if I had children I didn't want them here, cos […] you were forced to join all these paramilitary [youth organisations], […] every other child used to be involved in it and you either had to become a Christian or get out of the country to get out of it.
However, she had returned to live and raise her young family in the post-conflict era, and, among other things, had become involved in setting up a cross-community play scheme for toddlers in her area:
We have people coming in every week and when people get to know this, that it isn't that big [paramilitary mural] on the wall, […] whatever. It isn't that kind of hardness in the toddler group. [my italics] While somewhat surprised that women from diverse backgrounds were not put off by the prominent paramilitary murals on the external walls of the playgroup building, she nevertheless affirms the possibility of reducing the 'hardness' of sectarian attitudes through running such schemes for very young children.
The second focus of 'voice' amongst these mothers was on local anti-social activity. During interviews, various efforts to provide young people with places to go and activities to get
---
Author Biography
Lisa Smyth is a Senior Lecturer in Sociology at Queen's University Belfast. Her research focuses on the normative and interactive quality of social status, with a particular focus on gender and families. She is the author of The Demands of Motherhood: Agents, Roles and Recognition (Palgrave Macmillan, 2012), and Abortion and Nation: The Politics of Reproduction in Contemporary Ireland (Ashgate, 2005). She has also worked on the social politics of breastfeeding, abortion and sex education, as well as motherhood and social change.
b36162cba913d6a85826aaded979df566eee218b | Current state and the support system of athlete wellbeing in Japan: The perspectives of the university student-athletes | 2022 | [
"JournalArticle",
"Review"
] | systems and improve information accessibility. Given that this pilot study's validity, reliability, and feasibility were verified, further studies should focus more on the wellbeing of Japanese elite athletes in high-performance sports (i.e., Olympic and Paralympic athletes). |
The optimization of athletes' wellbeing has been increasingly considered essential both in the academic and practical fields of high-performance sports. Various organizations, such as the International Olympic Committee, have highlighted its importance, particularly mental health. Moreover, the increased attention to athlete wellbeing in sport policy debates at the national level has led to the development and implementation of a support system for athletes' mental wellbeing in some countries. Nevertheless, the literature is limited to understanding the case of Japan. Interestingly, only 0.8% of the literature is available on "athlete" and "wellbeing" in Japanese compared to English journals up to 2019. Therefore, the purpose of this study was to identify (a) the current state of wellbeing of Japanese university studentathletes, (b) the level of knowledge about athlete wellbeing, and (c) the athletes' perception of the availability of wellbeing support in the national sports federations, (d) the athlete experience of support services, and develop the types of national support athletes expect and need from the government and national sports federations in the future. As a pilot study, a total of 100 Japanese university student-athletes (43 male, 57 female) from 17 Olympic and seven Paralympic sports completed an online survey. Consequently, the state of their wellbeing was self-perceived as good in all dimensions (i.e., physical, mental, educational, organizational, social, and financial). Moreover, the results showed low recognition of the term "athlete wellbeing" and a lack of knowledge of the availability and accessibility of appropriate support services. The results also showed that Japanese university student-athletes rarely seek help from experts, while 45% indicated "no one" to talk to. Interestingly, however, most athletes considered each dimension of wellbeing important in relation to their performance development. Based on the results, it is necessary to develop an education program, guidelines, and detection
---
Introduction
The optimal and holistic development as a human being is considered important for athletes to achieve their maximum potential both in performance and life after their athletic careers (Wylleman, 2019). Although participation in sports and physical activity benefits one's health and mental wellbeing in many ways (Biddle et al., 2015), pursuing excellence in high-performance sports is associated with various factors that may pose threats to the holistic wellbeing of athletes (MacAuley, 2012;Gouttebarge et al., 2019;Giles et al., 2020).
Given those risks in a highly competitive environment, optimizing athlete wellbeing, particularly mental health, has received considerable attention in high-performance sports academic, political, and practical fields. The increase in interest might be triggered by some high-profile athletes openly and publicly discussing their challenges with mental health and wellbeing (Heaney, 2021). In the period between 2018 and 2020, several sporting organizations published consensus statements on athletes' mental health, including the International Olympic Committee (IOC) (Moesch et al., 2018;Schinke et al., 2018;Gorczynski et al., 2019;Reardon et al., 2019;Van Slingerland et al., 2019;Henriksen et al., 2020).
At the same time, several national governments and sports organizations have conducted investigations and developed policies to guide the nation to promote and support athletes' mental wellbeing at a system level (Canadian Olympic Committee, 2015; Department for Digital, Culture, Media and Sport, 2018; English Institute of Sport, 2019; Australian Institute of Sport, 2020; High Performance Sport New Zealand, 2021). To operationalize the policy into practice, some leading countries have launched teams responsible for establishing and implementing the national support system and programs, mostly at the high-performance sports centers, as an integral part of athlete development. Those support frameworks appeared to include some of the common approaches proposed by Purcell et al. (2019): (a) providing support for athletes to equip them with a range of skills to self-manage distress, (b) educating key stakeholders (e.g., coaches, science and medicine practitioners, support service providers, etc.) in a high-performance environment to better understand and respond to symptoms regarding mental health and wellbeing, and (c) establishing multi-disciplinary teams and/or professionals to better support and manage prevention and reaction to athletes' problems with mental health and wellbeing.
Despite mounting literature and practical implementation of policies to support athlete wellbeing, there are several limitations associated with athlete wellbeing. First, the majority of research has focused on athletes' physical and psychological/mental wellbeing, even in the last 2 years (Biggins et al., 2020;Schary and Lundqvist, 2021;Jovanovic et al., 2022). Thus, little is known about athletes' wellbeing from a holistic perspective. Furthermore, Giles et al. (2020) argued that evidence-based intervention in athlete wellbeing is limited due to methodological and conceptual issues. Lundqvist (2011) also claimed that "wellbeing is treated as an unspecific variable, inconsistently defined and assessed using a variety of theoretically questionable indicators" (p. 118). These methodological and conceptual issues associated with athlete wellbeing, therefore, make it difficult to carry out evidence-based interventions in practice (Giles et al., 2020). Moreover, despite most studies having been conducted in Western countries, there is still little information available about other regions, including Asia (Reardon et al., 2019). Additional research would therefore contribute to knowledge in this area, particularly in developing the support policy and framework that could be operationalized in practice.
Japan earned 27 gold medals and 58 total medals in the Tokyo 2020 Olympic Games, placing them in the top three nations for gold medals, which were the best results ever. Since the development of sports has become the responsibility of the government due to the enactment of the Basic Act on Sport in 2011 (Ministry of Education, Culture, Sports, Science and Technology, 2011), the landscape of Japanese high-performance sports has dramatically changed at all levels, such as policies, systems, the structure, and programs. However, there had been little discussion about athlete mental health and/or wellbeing until the COVID-19 pandemic struck, resulting in the Tokyo 2020 Games being postponed by 1 year. In fact, Kinugasa et al. (2021) reported that only 14 articles were available on "athlete" and "wellbeing" in the Japanese language; it was only 0.8% of those in English journals up to 2019. However, gradually more focus is being directed toward athletes' mental health, that is, a state of mental wellbeing. For example, Tsuchiya et al. (2021) argued the need for support for athletes' mental health by reporting the positive correlation with a psychological stress response to COVID-19.
To contribute to Evidence-Based Policy Making (EBPM) in the high-performance sports field, the Japan Sport Council (JSC) launched a new research group in social sciences at the Japan Institute of Sports Sciences (JISS), a part of the Japan High Performance Sport Center (HPSC) (Kukidome and Noguchi, 2020). Given the limited evidence available in the field of athlete wellbeing in Japan, the group initiated the research project to provide some evidence to support the policy development into operationalization in Japan-that is, a pilot study with university student-athletes aiming to reveal (a) the current state of wellbeing of Japanese university student-athletes, (b) the level of knowledge about athlete wellbeing, (c) the student-athletes' perception of the availability of wellbeing support in the national sports federations, and (d) the student-athletes experience of support services on wellbeing, and develop the types of national support student-athletes expect and need from the government and national sports federations in the future.
---
Materials and methods
---
Participants
The participants for the pilot study included 100 Japanese university student-athletes (43 male, 57 female) aged from 20 to 25 years (M = 21.3, SD = 1.2). The sample was limited to student-athletes who attend either undergraduate or postgraduate programs and belonged to the university's Athletic Department, participating in sports in an official event of the Tokyo Olympic and Paralympic Games 2020. The participants represented 18 Olympics (baseball and softball, basketball, athletics, volleyball, football, badminton, tennis, swimming, table tennis, archery, handball, judo, rhythmic gymnastics, rugby sevens, artistic gymnastics, karate, surfing, and water polo), and seven Paralympic sports (para-table tennis, para-badminton, para-swimming, para-archery, boccia, para-athletics, and para-judo). The participants were grouped into two categories: "elite" for those who have competed in international competitions representing Japan, including five serial medalists (36.0%), and "sub-elite" for the rest (64.0%). 11% of the participants were carded athletes in national (n = 1), senior (n = 4), youth (n = 3), and junior (n = 1) categories for less than 1 year (33.3%), 1-3 years (44.4%), and 4-6 years (22.2%).
---
Measures
Given that this pilot study was specifically designed for the initial investigation to capture the general trends of student-athlete wellbeing in Japan with the aim of providing evidence for developing the support system within the country, the instrument was self-developed in the Japanese language.
To maintain the holistic nature of wellbeing, we developed the instrument in accordance with the Holistic Athlete Career Model (Wylleman, 2019). To validate this 48-item instrument, we used the Delphi method (Hsu and Sanford, 2007) with eight psychologists and social scientists with an excellent understanding of athlete wellbeing. The instrument was re-surveyed until the experts reached a consensus (100% agreement by the eight experts), and the content validity and feasibility of the instrument were verified through this process. The reliability of the instrument was tested by administering the same instrument twice, within 1 week, to the same 38 respondents and calculating the intraclass correlation coefficient (ICC). Test-retest reliability of the instrument was found to be good (r = 0.7 ± 0.3) (Hopkins, 2000).
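The paper reports test-retest reliability as an intraclass correlation coefficient but does not state which ICC model or software was used, so the following Python sketch is only an illustration of how such a coefficient could be computed for a single item; the two-way consistency ICC(3,1) form, the synthetic 38-respondent data, and all variable names are assumptions, not details taken from the study.

```python
import numpy as np

def icc_3_1(scores: np.ndarray) -> float:
    """Two-way mixed, consistency, single-measures ICC(3,1).

    `scores` is an (n_subjects, k_occasions) matrix, e.g. the answers of
    38 respondents to one Likert item at test and retest (k = 2).
    """
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-respondent means
    col_means = scores.mean(axis=0)   # per-occasion means

    # Partition the total sum of squares into rows, columns and error.
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic test-retest data for 38 respondents on a 5-point item.
    test = rng.integers(1, 6, size=38)
    retest = np.clip(test + rng.integers(-1, 2, size=38), 1, 5)
    data = np.column_stack([test, retest]).astype(float)
    print(f"ICC(3,1) = {icc_3_1(data):.2f}")
```

With two administrations one week apart, each row of the matrix holds one respondent's pair of scores, so a high ICC simply indicates that respondents kept their relative positions across the two occasions.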
---
Demographic information
The measurement consisted of 11 items to gather demographic information about the participants. These items included gender, age, place of living, working/educational status, sport type, the number of years played in their main sport, organizational type, carded category, the number of years played in their carded category, and the best performance record in their sport.
---
Awareness of and state of athletes' wellbeing
As "athlete wellbeing" is a relatively new concept in Japan, one item was included to understand the level of awareness in student-athletes. In addition, the measure comprised seven Likert-scale items to assess the state of wellbeing in each dimension (i.e., physical health, psychological health, balance with education and/or work, interpersonal relationships, organizational environment, financial security and stability, and legal security and safety). Participants were asked about their state of wellbeing in each dimension over the past 3 years, to account for the spread of COVID-19 (mostly in 2020 in Japan), and a 5-point scale was used in most of the items (e.g., 1 = very good, 2 = somewhat good, 3 = not so good, 4 = not good at all, 5 = not sure). Furthermore, in order to take the degree of influence of COVID-19 into consideration, another seven items were added (e.g., Does COVID-19 have more influence on your wellbeing than usual before the pandemic?).
---
Influence and importance of wellbeing in relation to athlete performance
A total of 12 items were included in the instrument to reveal the perspectives of student-athletes on how much each dimension of wellbeing would influence performance and how important they perceived a state of wellbeing in their performance development. Those items scaled from 1 (very much) to 5 (not at all).
---
Availability, experience, and expectation of support services
Two items were specified to collect information about the availability of guidelines and support programs on athlete wellbeing and/or mental health within the national sports federation. Furthermore, a total of 25 items were prepared in order to investigate the student-athletes' experience of receiving support services in relation to their wellbeing. In addition, one item was added to identify the level of expectation for developing national support services by the government and/or national sports federations. Those items were developed from the perspective of general service provision, covering information, detection, proactive and/or reactive support services, tools, and networking.
---
Life satisfaction
The overall satisfaction with life scores from the national wellbeing and quality of life survey were taken on an 11-point scale from 0 (not satisfied at all) to 10 (very satisfied) to compare the participants' scores with the general population in Japan (Cabinet Office, 2018).
---
Procedures
Ethical approval for this study was granted by the authors' sports science institute ethical review committee (Reference #047) in accordance with the Declaration of Helsinki. A written informed consent form describing the aim, methods, risks associated with participation, confidentiality considerations, and data ownership and management methods of the study was provided to the student-athletes before the participants filled out the web-based questionnaire. They could withdraw from participation at any time, even after they have agreed to participate in the study. After we obtained informed consent from the participants, they completed the survey using the web-based questionnaire system (Tokyo: Cross Marketing Group Inc.), taking approximately 15-20 min on a confidential and voluntary basis. The survey was conducted from February to March 2021.
---
Analysis
Chi-square tests were used to determine the presence and magnitude of deviations away from expected distributions, and the significance level α was set at 0.05. Correlation analysis was applied to identify the relationship between the items with the following thresholds: < 0.1, trivial; 0.1-0.3, small; 0.3-0.5, moderate; 0.5-0.7, large; 0.7-0.9, very large; and 0.9-1.0, almost perfect (Hopkins et al., 2009). The Statistical Package for the Social Sciences (SPSS) for Windows version 27 (Armonk, NY: IBM Corp.) was used for this analysis. A Welch's t-test was conducted for group comparison using RStudio statistical computing software version 1.4.1717 (Boston, MA: RStudio), and the significance level α was set at 0.05. Uncertainty in true (population) effect values was expressed as 90% confidence limits.
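The authors ran these analyses in SPSS and RStudio; the short Python/SciPy sketch below is not their code, only an illustration of the same three steps described above (a chi-square test on a contingency table, a correlation interpreted with the Hopkins magnitude thresholds, and a Welch's t-test with 90% confidence limits for the mean difference). The toy data and helper names are invented for the example.

```python
import numpy as np
from scipy import stats

def hopkins_magnitude(r: float) -> str:
    """Label |r| using the thresholds cited in the text (Hopkins et al., 2009)."""
    for upper, label in [(0.1, "trivial"), (0.3, "small"), (0.5, "moderate"),
                         (0.7, "large"), (0.9, "very large")]:
        if abs(r) <= upper:
            return label
    return "almost perfect"

rng = np.random.default_rng(1)

# 1) Chi-square test on an invented elite vs. sub-elite response table.
table = np.array([[10, 15, 8, 3],    # elite: counts per response category
                  [20, 25, 12, 7]])  # sub-elite
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p_chi:.3f}")

# 2) Correlation between life satisfaction and one wellbeing score.
life_satisfaction = rng.normal(5.7, 2.0, size=36)
wellbeing_score = 3.0 - 0.3 * life_satisfaction + rng.normal(0, 0.8, size=36)
r, p_r = stats.pearsonr(life_satisfaction, wellbeing_score)
print(f"r = {r:.2f} ({hopkins_magnitude(r)}), p = {p_r:.3f}")

# 3) Welch's t-test (unequal variances) with 90% confidence limits.
elite = rng.normal(5.4, 2.1, size=36)
sub_elite = rng.normal(5.9, 1.9, size=64)
t, p_t = stats.ttest_ind(elite, sub_elite, equal_var=False)

diff = elite.mean() - sub_elite.mean()
se = np.sqrt(elite.var(ddof=1) / elite.size + sub_elite.var(ddof=1) / sub_elite.size)
# Welch-Satterthwaite degrees of freedom for the interval around the difference.
df = se**4 / ((elite.var(ddof=1) / elite.size) ** 2 / (elite.size - 1)
              + (sub_elite.var(ddof=1) / sub_elite.size) ** 2 / (sub_elite.size - 1))
half_width = stats.t.ppf(0.95, df) * se
print(f"t = {t:.2f}, p = {p_t:.3f}, "
      f"90% CL for difference: {diff - half_width:.2f} to {diff + half_width:.2f}")
```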
---
Results
The current state of student-athlete wellbeing
The state of the participants' wellbeing in the past 3 years was perceived as somewhat good in all physical (M = 1.91, SD = 0.94), mental (M = 2.05, SD = 0.97), educational (M = 2.10, SD = 1.07), organizational (M = 2.42, SD = 1.26), social (M = 2.06, SD = 1.04), financial (M = 2.19, SD = 1.06), and legal (M = 2.53, SD = 1.36) dimensions. Among the seven dimensions of wellbeing, legal wellbeing received the highest (least favourable) mean score, corresponding to "relatively not good," whereas physical wellbeing received the lowest (most favourable) mean score, corresponding to "relatively good." The results of the correlation analysis showed that the correlations between the overall satisfaction with life scores and the seven dimensions of wellbeing were not significant (p > 0.05). However, the relationship between the overall satisfaction with life and wellbeing scores between the groups showed some significant differences (Table 1). In particular, moderate and small correlations were observed between the overall satisfaction with life and wellbeing scores in the organizational and financial dimensions only in the elite group (r = -0.51, p = 0.001; r = -0.36, p = 0.031, respectively). No significant differences were observed in the sub-elite group.
Furthermore, based on Chi-square tests between states of wellbeing and independent variables, no significant difference was found for gender, place of living, or Olympic sports compared to Paralympic sports. The performance level, however, showed significant differences in the organizational (p = 0.002), financial (p = 0.004), and legal (p = 0.004) dimensions of wellbeing. Compared with the sub-elite group, the elite athlete group more often reported "not good at all" for organizational wellbeing (p = 0.02) and "not so good" for social wellbeing (p = 0.01), but "somewhat good" for legal wellbeing (p = 0.01). Interestingly, only the sub-elite athlete group showed uncertainty (i.e., not sure) about their wellbeing in the organizational (p = 0.03), social (p = 0.004), financial (p = 0.04), and legal dimensions (p < 0.001). There was no significant difference in the overall satisfaction with life score between the elite and sub-elite athlete groups [p = 0.26 (90% confidence limits -1.47 to 0.27)].
Given that this study was conducted in early 2021, the influence of COVID-19 on their wellbeing was observed. As a result, the COVID-19 pandemic was perceived to have an impact on the state of student-athletes' wellbeing to some degree, as approximately half of the participants indicated either being greatly influenced or somewhat influenced in physical (57%), mental (61%), educational (52%), organizational (48%), social (48%), financial (49%), and legal (44%) wellbeing.
TABLE 1 The relationship between the overall satisfaction with life and wellbeing scores of the participants in the elite athlete group (represented Japan in the senior competition at the international level) and sub-elite athlete group (competed at the national level).
---
Athletes' perception of the influence and importance of wellbeing for performance
---
Influence on their performance
About half of the participants considered their performance was greatly influenced or somewhat influenced by physical (56%), mental (53%), educational (50%), social (47%), financial (42%), and legal (38%) wellbeing (Table 2). Moreover, significant differences were observed between the elite and sub-elite groups in social, financial, and legal wellbeing [p = 0.03 (90% confidence limits -0.88 to -0.13), p = 0.004 (90% confidence limits -1.03 to -0.29), and p = 0.003 (90% confidence limits -1.15 to -0.44), respectively]. Thus, it was found that the student-athletes in the elite group perceived their state of wellbeing to have more influence on their performance than did the athletes in the sub-elite group.
---
Importance for their performance
Many participants considered the dimensions of physical (83%), mental (80%), educational (72%), social (78%), financial (76%), and legal (71%) wellbeing to be very important or somewhat important in relation to improving their own performance (Table 2). No significant difference was observed between the elite and sub-elite athlete groups (p > 0.05), meaning that most Japanese student-athletes consider wellbeing important for their performance development regardless of performance level.
---
Availability of support policy, guidelines, and programs in national sports federations
The results revealed that Japan's support systems and programs were rarely available for student-athletes. First, 11.0% of the participants indicated the availability of guidelines on athlete wellbeing and/or mental health from the national sports federations, whereas 35.0% responded "No," and 54.0% showed "I do not know." Second, only 18.0% revealed that their national sports federation has some kind of policy or implementation of it to support the athlete's wellbeing and/or mental health. In comparison, some national sports federations have policies but no implementation (11.0%). Third, 21% of the participants indicated no policy or actions within the national sports federations, whereas 50.0% did not know the availability.
---
Student-athletes' experience of support for their wellbeing
The results indicated that most of the student-athletes (85.0%) had never received support for their wellbeing. The reasons were identified as (a) a lack of knowledge about how to access those services (49.4%), (b) the lack of information about those services available to them (43.5%), (c) the lack of understanding of the necessity to receive such support (11.8%), and (d) the lack of a service provider from whom they can receive support (10.6%). Interestingly, nine of 15 participants (60.0%) who experienced athlete wellbeing support in the past revealed that they received support from educational institutions (i.e., high schools and universities) rather than national sports federations (n = 2) or the Japanese Olympic and Paralympic Committees (n = 1). The support services the 15 participants received in the past comprised educational programs to gain knowledge and information (46.7%), develop the athletes' skills such as resilience and/or coping (40.0%), and mental health-related services (40.0%). Individualized consultation (26.7%), as well as information delivery and education programs, also seemed to be necessary.
---
Information
It was found that only 12.0% of the student-athletes knew the word and the meaning of "athlete wellbeing." In fact, 99.0% of them perceived that the national sports federations had never delivered information about their wellbeing to them. Moreover, 67.0% indicated that they had never obtained and/or gathered information about "athlete wellbeing." For the rest of the participants, the information sources ranged from online video (e.g., YouTube, SNS, etc.) (18.0%), national sports federations (12.0%), literature (7.0%), and information delivery from the entourage (support staff = 4.0%, coach = 3.0%, teammates = 3.0%, retired athletes = 2.0%), to the websites of the IOC and/or International Sports Federations (IFs) (2.0%).
---
Detection
The results demonstrated the lack of a detection and monitoring system for student-athlete wellbeing. First, 77.0% of the participants responded that they had no experience of national sports federations approaching them to understand their state of wellbeing. Despite the relatively low experience of the student-athletes (23.0%), the detection methods utilized by the national sports federations in their approach were also specified as: (a) conversation with the coach and/or experts (11.0%), (b) informal daily conversation (9.0%), (c) utilization of measurement tools (8.0%), (d) individual confirmation from behavior such as continuous absence from training (5.0%), and (e) clinical diagnostic tests (3.0%). Interestingly, however, no participants indicated any experience of using a detection tool themselves.
---
Help-seeking behavior when faced with a threat or risk
Most participants indicated that they had never witnessed or experienced behavior that could be considered a threat or risk to the student-athlete's wellbeing and/or mental health (84.0%). Among the 16 participants who had witnessed or experienced inappropriate behavior, 31.2% shared or reported it to someone else, such as teammates or team staff (n = 6), or the national hotline set up by the national sports federations, Japanese Olympic Committee, Japanese Paralympic Committee, or JSC (n = 4). The reasons why the majority of the student-athletes (68.8%) did not share or report the case were that the athletes: (a) did not want to make it a big deal (45.5%), (b) were afraid of being identified as the person who reported it (36.4%), (c) did not know whom to report to (18.2%), or (d) did not want to be involved (18.2%). Of those who shared or reported it to someone else, however, 60.0% indicated a positive experience, expressing satisfaction with how the issues were handled. Finally, the results demonstrated the lack of information and knowledge about the availability of a hotline, as 75.0% of the participants responded that they had never heard of or been aware of the availability of a hotline.
---
Help-seeking behavior when anxious or distressed
The results also showed that 55.0% of the participants had someone whom they could talk to whenever they were anxious or distressed, including parents (61.8%), friends (60.0%), teammates (30.9%), significant others (25.5%), senior athletes (23.6%), brothers and sisters (18.2%), coaches (10.9%), and/or support service staff (5.5%). However, only 19.0% chose to approach experts to seek help. Those experts included psychiatrists (26.3%), clinical psychologists (21.1%), other psychological specialists (e.g., industry and school counselors) (15.8%), sports counselors (15.8%), and so on. Interestingly, 31.6% of those who sought help from experts identified a coach as one of those they turned to. Their experience of working with the experts tended to be somewhat positive, as 47.4% indicated their satisfaction, whereas the same number of participants were not sure whether they were satisfied or not. Interestingly, the barriers to seeking help from experts were identified as: (a) lack of knowledge about where they could find the appropriate experts (37.0%), (b) uncertainty about the cost of receiving support (35.6%), (c) disbelief in the ability of experts to solve their problems (30.1%), (d) no clarity about whom to talk to (23.3%), (e) worries about how others around them would see them (17.8%), and/or (f) a feeling of embarrassment about seeking help (12.3%).
---
Athletes' expectations for the national support system and service programs for their wellbeing
If the government and national sports federations were to develop the support system and service programs in Japan, 38.0% of the participants expressed their willingness to receive support, while 31.0% were reluctant to use the service in the future. The majority of the participants, however, agreed with the importance and necessity of the government and national sports federations developing the system and programs to promote and support athlete wellbeing in Japan (Figure 1). Based on the results, "coach education" was the most expected action (77.0%), followed by "develop a guideline" (76.0%), "clear statement on strategic plan or policy of national sports federations" (75.0%), and "set up the system to react when any problem occurs (investigation, measures, and penalties, etc.)" (75.0%). These results might indicate the need for coaches to understand the field of wellbeing while expecting the government and national sports federations to provide guidance. Considering that all items were somewhat equally supported and even the least expected item obtained 66.0%, it could be concluded that various actions could potentially be taken to develop the national support systems and programs in the future.
---
Discussion and practical implications
As there is convincing evidence that indicates pursuing excellence in high-performance sports is associated with various factors that may become threats to the holistic wellbeing of athletes (MacAuley, 2012;Gouttebarge et al., 2019;Giles et al., 2020;Bennie et al., 2021), several high-profile countries in the Olympic and Paralympic Games, such as Canada, Australia, Netherlands, New Zealand, the United Kingdom, the United States, and so on, have started developing their own support systems and programs for athletes to pursue excellence both in performance and wellbeing in recent years. Japan is considered one of the world's leading countries in high-performance sports by placing in the top 3 in the summer Olympic Games of Tokyo 2020. However, little literature is available in the Japanese context (Kinugasa et al., 2021). As an initial investigation, this pilot study aimed to reveal the general trends of athlete wellbeing in Japan, particularly from the perspectives of university student-athletes. In the following, the discussion is carried out as per the four specific objectives of this study.
FIGURE 1
The participants' expectations for the national support system and service programs on wellbeing on a 5-point scale (1 = strongly agreed, 2 = relatively agreed, 3 = relatively disagree, 4 = strongly disagree, 5 = not sure).
First, this study aimed to investigate the current state of student-athlete wellbeing from a holistic development perspective (Wylleman, 2019). Based on the results, the Japanese university students demonstrated a relatively good state in all seven dimensions of wellbeing (i.e., physical, mental, organizational, social, educational, financial, and legal) despite the observation of COVID-19 influence to a certain degree. In fact, the overall satisfaction with life scores of the participants and the general population in Japan were similar (5.7 and 5.9, respectively) (Cabinet Office, 2018). The lower score of wellbeing in organizational and social wellbeing for the elite group somewhat supported the idea that elite athletes need more support than non-elite athletes as they face higher demands that may threaten their wellbeing. In addition, the results show that only the sub-elite athlete group indicates their uncertainty (i.e., not sure) about their wellbeing in organizational, social, financial, and legal dimensions, suggesting lower awareness of their wellbeing at the non-elite level. Those results implied that elite athletes need more support for their wellbeing. The holistic approach is preferable by providing not only physical and mental but also social and organizational dimensions of their wellbeing.
The second objective of this study was to understand the level of knowledge about athlete wellbeing in university student-athletes. Given the little information available in Japanese, the result showed that the student-athletes were previously not familiar with the word "athlete wellbeing, " and the majority did not exactly know the meaning of it. However, given the description in the written form attached to the survey, approximately half of the student-athletes perceived their performance was significantly influenced or somewhat influenced by physical, mental, educational, social, and financial wellbeing. Moreover, more than 70.0% of the participants considered athlete wellbeing in all dimensions to be very important or somewhat important to improving their performance. These results have implications in two ways. One is that it is essential to raise awareness of athlete wellbeing in Japan so that athletes recognize the importance of self-care for their wellbeing, which, in turn, influences their performance. The other one is that those involved in the field of wellbeing should not take wellbeing apart from performance by understanding that those two are intercorrelated, at least from the perspective of student-athletes. In other words, the support for athlete wellbeing should be designed to align with the performance development plan and progress of the athletes.
The third objective of this study was to reveal the athletes' perception of the availability of a support system within the national sports federation. With regard to the availability of policies, guidelines, and programs, the results suggested that (a) there were only a few national sports federations already accommodating the support policies, programs, and guidelines in their systems, (b) the information might not be appropriately delivered to athletes despite its availability, or (c) the athletes were not eligible to access the service and information due to their performance level. As only 2% of the participants indicated experience of receiving support services for their wellbeing from the national sports federation, it could be argued that few national sports federations have a support system within the organization, supporting point (a) above. Given that the results were derived from the athletes' perception, however, further investigation of the national sports federations is necessary to conclude that they have not developed the policy, guidelines, and service programs for their athletes.
The fourth objective was to investigate athletes' experiences of support services from various points of view, including information, detection, and help-seeking behavior in reacting to a threat and/or risk, as well as a feeling of anxiety and/or distress. Overall, the results showed that most of the university student-athletes had never, at least to their knowledge, received support services for their wellbeing in the past. In terms of information, 67.0% indicated that they had never obtained and/or gathered information about "athlete wellbeing." Interestingly, however, it was found that the lack of information about the support services available and where to access them was the number one reason cited by the student-athletes, rather than their rejection of the service. Despite the small sample size of those who obtained information about wellbeing (33.0%), given the results indicating their information-seeking behavior, it could be recommended to consider the use of online platforms such as YouTube and/or social networking sites (18.0%) in addition to the national sports federations (12.0%) and entourage (e.g., coaches, teammates, support staff, former athletes) (12.0%) as channels for information delivery. Nevertheless, caution is needed regarding the accuracy of the information, as only 2.0% indicated experience of seeking information on the official websites of the IOC and/or IFs. In order for student-athletes to systematically access the right information in the Japanese language, a "one-stop-shop resource center" could be a possible action, while conducting further research in the Japanese context is necessary to provide evidence for policy-makers and practitioners. Kinugasa et al. (2021) proposed a definition of athlete wellbeing in Japanese, which could be used in policy and practice in the future. Regarding detecting problems associated with athlete wellbeing, the results showed that 77.0% of them had no experience of receiving this service from national sports federations. As for detection techniques, it was found that communication and/or interaction was more commonly used than the application of measurement tools and/or clinical tests. Furthermore, concerning help-seeking behavior, 84.0% claimed no experience of facing or witnessing inappropriate behavior that could be a threat or risk to the athlete's wellbeing. Among those 16.0% with experience, approximately 70.0% did not share or report it to someone because they did not want to make it a big deal (45.5%) and/or were afraid of being identified as the person who reported it (36.4%). Despite the availability of a hotline for wellbeing in a broader sense, only 4 participants had used it to report the problem. This was probably due to the lack of awareness, as 75.0% of the participants indicated that they had never heard of or been aware of the hotline. It was evident that the student-athletes tended to first report problems to their entourage rather than the official hotline set up by organizations to seek help. Finally, the results indicated that 45.0% of the student-athletes did not have anyone to talk to about their anxiety or distress. Within the 55.0% of the student-athletes who did, it was found that approximately 60.0% of them would initially talk to their parents or friends rather than coaches or support staff. This implies that information and education should not be limited to athletes and coaches but should also include parents and the wider entourage so that they understand athlete wellbeing better.
Additionally, despite the low rate of student-athletes (19.0%), coaches (31.6%), psychiatrists (26.3%), and clinical psychologists (21.1%) were the top three experts from whom student-athletes have sought help in the past, while non-psychology experts such as medical doctors and athletic trainers/physiotherapists (10.5%) were also indicated for their options. These results indicate that it is essential for the organization to consider the development of a network with experts in the fields of mainstream psychology and medicine, as well as the involvement of coaches within the support system in Japan. It should, however, be noted that only 8.0% of the student-athletes indicated their willingness to talk to experts, while 43.0% did not feel a need, and 30.0% could not seek help despite wanting to do so. Interestingly, the main barriers for the student-athletes were a lack of knowledge about where they could find the appropriate experts, their uncertainty and a feeling of incapability about the cost, and their distrust in the ability of experts to solve their problems. According to these results, it could be suggested that to facilitate the change in athletes' help-seeking behavior from experts, information and education, as well as a reference network to access the appropriate experts for their issues, are necessary as the barriers seemed not to be the stigma often associated with athletes.
These findings could then lead to a discussion about the implications associated with this study's last objective, which was to identify the types of support student-athletes expect from the government and/or national sports federations in the future. It was interesting that most student-athletes strongly or relatively agreed to all of the proposed actions, including clear guidance of the direction, information gathering and delivery, athlete and coach education, the development of detection and monitoring tools, the settling of the system to react to problem occurrences, the employment of experts, and the development of a collaborative network system with experts, expert organizations, private companies, government, and national sports federations, and the development of a referral network. These results somewhat supported the argument that to implement policy into practice, increasing awareness and knowledge through information delivery is essential but not sufficient to address athletes' various needs for mental health and wellbeing (Purcell et al., 2019). The development of these support frameworks could be considered the common approach in a national system worldwide (Department for Digital, Culture, Media and Sport, 2018;Moesch et al., 2018;Australian Institute of Sport, 2020;High Performance Sport New Zealand, 2021). As those approaches were somewhat equally agreed upon (Figure 1), however, it was difficult to make the prioritization among those actions in this pilot work. Interestingly, however, more than one in three athletes showed resistance to receiving support services even if the government and/or national sports federations establish those support frameworks in the highperformance sport system. These attitudes might be associated with a lack of knowledge and information, as observed in their experience of receiving support from experts rather than cultural stigma. Therefore, promoting athlete wellbeing is necessary to consider those obstacles when designing and planning the development of policies, systems, and programs to support athlete mental health and/or wellbeing to facilitate its utilization in better ways.
In summary, this pilot study of university studentathlete wellbeing in Japan revealed the general trends in broader and holistic perspectives as little information was available. Based on the results, the current state of studentathletes' wellbeing was relatively positive despite the influence of COVID-19. Given the lack of information related to athlete wellbeing in Japan, the student-athletes demonstrated low recognition of the word and meaning of "athlete wellbeing." They indicated, however, that they perceived their state of wellbeing might influence their performance and, therefore, be important for their performance development. Nevertheless, in the perception of student-athletes, few national sports federations have policies, guidelines, and support programs in place for athletes. It was, therefore, evident that most of the student-athletes had never experienced the support service on wellbeing in terms of information, detection, and help-seeking behavior. Despite the uncertainty of utilizing the support provided, the student-athletes agreed that it was necessary for the government and/or national sports federations to take actions such as clear guidance of the direction, information gathering and delivery, athlete and coach education, the development of detection and monitoring tools, the settlement of the system to react to problem occurrences, employment of experts, and the development of a collaborative network system with experts, expert organizations, private companies, government and national sports federations, and the development of a referral network. Given these results, further investigations were required, particularly targeting athletes in high-performance sports (i.e., Olympic and Paralympic athletes) and national sports federations.
---
Limitations and future direction
There were some limitations associated with this pilot study. First, the COVID-19 pandemic affected the findings as the study was conducted during the State of Emergency in Japan. In fact, approximately half of the participants perceived the influence of the COVID-19 pandemic on their wellbeing. To account for the COVID-19 pandemic, the states of athlete wellbeing in each dimension were asked over the past 3 years. Since this investigation focused on the general trend of student-athletes' perceptions of their state and environment of holistic wellbeing, the instrument consisted of only one set of items specifically capturing the influence of COVID-19. Second, in terms of methodology, the sample size of 100 is limited for subgroup analysis. Therefore, further studies with a larger sample size could carry out in-depth analyses of athlete wellbeing by subgroups such as gender, length of time in the sport, and status of physical limitations, which would support the generalizability of the findings. Finally, as the interval of 7 days for test-retest reliability might not be sufficient, a minimum time gap of a fortnight may be necessary for future investigation.
Based on the findings from this pilot study, further investigation should be carried out to develop the national support system in Japan. First, future studies could target elite athletes (i.e., Olympic and Paralympic athletes) on a larger scale. Second, as the findings were only derived from athletes' perspectives, it could suggest investigating the national sports federation's point of view regarding the availability of athlete support systems and/or programs. Third, the researchers could consider the study about the wellbeing of the entourage because the issues and challenges associated with the topic of wellbeing are not necessarily limited to athletes as they also spend considerable time in a highly demanding environment (Breslin et al., 2017).
Given that the lack of information on the Asian population in the field of athlete wellbeing and mental health is evident (Reardon et al., 2019), international collaborative research in the Asian region is necessary. Furthermore, comparing Asian and Western countries could inform cultural considerations in developing each country's policies, systems, and programs. As the JSC, the parent organization of the JISS, is the only national sports agency responsible for grassroots to high-performance sports in Japan, the social science research group of the JISS will continue to study this field to provide further evidence and information to support policy implementation in the field of athlete wellbeing by collaborating with researchers both in Asia and across the world in the future.
---
Data availability statement
The datasets generated for this study will not be available due to the privacy of the participants. Please contact the corresponding author for any further information and any requests to access the datasets.
---
Ethics statement
The studies involving human participants were reviewed and approved by Japan Institute of Sports Sciences Ethical review Committee. The patients/participants provided their written informed consent to participate in this study.
---
Author contributions
YN conceptualized the study, developed the instrument, supervised the analysis, and drafted the manuscript from initial to final. CK conducted data analysis and drafted parts of the methods, measures, and results. TK supervised the whole process as the project leader, recruited participants, conducted the survey and data analysis, and drafted some of the methods, measures, and results in parts. All authors contributed to the article and approved the submitted version.
---
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
---
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. | 45,529 | 276 |
af3ff0e3804a18fa932bfb0abb184d80ec6b134e | Community Resilience and Disaster Preparedness: A Social Analysis of Vulnerability and Coping Mechanisms in Indonesian Villages | 2,023 | [
"JournalArticle"
] | This research delves into the complexities of community resilience and disaster preparedness in Indonesian villages through a qualitative case study approach. The analysis reveals several interconnected themes, including socio-economic disparities, the efficacy of traditional practices, collaborative dynamics between formal and informal networks, the impact of education and awareness, the role of social capital, gender-responsive approaches, cultural influences, communication challenges, and the empowerment of communities through training and government support. These findings, aligned with global frameworks, emphasize the necessity of context-specific, inclusive, and holistic approaches to address vulnerabilities and enhance resilience. The study advocates for integrated strategies that empower communities as active agents in building resilient societies. | Introduction
The aftermath of disasters is not measured solely in terms of physical destruction; it is equally defined by the resilience and preparedness of the affected communities. This study recognizes the significance of community-level initiatives in disaster risk reduction, aligning with the Sendai Framework for Disaster Risk Reduction 2015-2030, which emphasizes the need for localized action and community engagement (UNDRR, 2015). Understanding the social dynamics that shape vulnerability and resilience is essential for developing targeted interventions that resonate with the diverse contexts of Indonesian villages.
Recent studies have emphasized the role of social capital in improving community resilience (Aldrich & Meyer, 2015; Norris et al., 2008). Social capital, encompassing social networks, trust, and shared norms, has been identified as an essential asset in post-disaster recovery and preparedness (Aldrich, 2012). In the context of Indonesia, where communal ties often form the backbone of daily life, investigating the impact of social capital on disaster preparedness is particularly pertinent.
The rise of climate change and its correlation with the increased frequency and intensity of natural disasters has prompted a re-evaluation of existing risk reduction strategies (IPCC, 2021). Recognizing this, our research aims to explore the adaptation of traditional practices and indigenous knowledge within Indonesian villages as valuable coping mechanisms. These localized strategies, deeply rooted in cultural contexts, can offer unique insights into sustainable disaster preparedness.
In summary, this study seeks to bridge the gap between theoretical frameworks and practical applications by carrying out a nuanced examination of the social factors influencing vulnerability and the coping mechanisms deployed by Indonesian villages. By doing so, we aspire to provide actionable recommendations for policymakers, local authorities, and humanitarian organizations to bolster the resilience of communities facing the ever-present risk of natural disasters.
Socio-economic disparities within Indonesia further amplify the challenges faced by vulnerable communities in the wake of disasters. Recent reports from the National Disaster Mitigation Agency (BNPB) highlight the disproportionate impact of natural disasters on marginalized groups, including those with lower income levels and limited access to education (BNPB, 2023). Addressing these disparities is not only a matter of humanitarian concern but also an essential element of building sustainable and inclusive disaster resilience.
As we embark on this exploration of community resilience and disaster preparedness, it is critical to recognize the dynamic nature of vulnerability. The effects of disasters are not uniform across communities, and the capacity to cope and recover is shaped by a complex interplay of socio-cultural, economic, and environmental factors (Adger, 2006). This study seeks to unravel these complexities by adopting a qualitative case study approach, allowing for an in-depth understanding of the distinct challenges faced by different villages across Indonesia.
The findings of this research are expected to contribute to the ongoing discourse on disaster risk reduction in the Asia-Pacific region, aligning with the Hyogo Framework for Action (UNISDR, 2005). By contextualizing global frameworks within the specific socio-cultural landscape of Indonesian villages, this study aspires to offer nuanced insights that can inform not only local policies but also contribute to the wider international understanding of community resilience in the face of natural disasters.
---
Methods
A qualitative case study approach was employed to delve into the intricacies of community resilience and disaster preparedness in Indonesian villages. This approach was deemed suitable for its capacity to offer a nuanced understanding of the socio-cultural dynamics shaping vulnerability and the coping mechanisms adopted by communities. Two geographically diverse villages were purposively selected to represent different regions within Indonesia, ensuring a comprehensive exploration of experiences of and responses to natural disasters.
The data collection procedure encompassed a mixture of in-depth interviews, focus group discussions, and an analysis of local government disaster management plans. Semi-structured interviews were conducted with key community leaders, local authorities, and individuals with expertise in disaster management, providing insights into the social dynamics influencing vulnerability and community leaders' perceptions of effective coping strategies. Separate focus group discussions were organized within each village, involving a diverse group of residents. These discussions facilitated exploration of community-level perspectives, experiences, and the communal strategies employed for disaster preparedness and recovery.
Additionally, existing disaster management plans from the selected villages were analysed to contextualize the formal frameworks in place. This analysis aimed to uncover the interface between community-based initiatives and formal disaster management systems. Thematic analysis, following the guidelines proposed by Braun and Clarke (2006), was employed for data analysis. Transcriptions of interviews and focus group discussions were systematically coded, and emergent themes were iteratively refined through a process of constant comparison.
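Braun and Clarke's approach is interpretive rather than computational, and the paper does not mention any software support for coding. Purely as an illustration of how coded transcript segments might be organized during constant comparison, the Python sketch below groups invented codes under candidate themes and counts how often each theme appears per village; all excerpts, codes, and theme names are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical coded segments: (village, assigned code) pairs produced
# while reading interview and focus group transcripts.
coded_segments = [
    ("Village A", "limited household budget"),
    ("Village A", "no disaster kit"),
    ("Village B", "flood signs from ancestors"),
    ("Village B", "community drill"),
    ("Village A", "cooperation with local government"),
]

# Candidate themes map to the codes grouped under them (refined iteratively).
themes = {
    "socio-economic constraints": {"limited household budget", "no disaster kit"},
    "traditional practices": {"flood signs from ancestors", "community drill"},
    "formal-informal collaboration": {"cooperation with local government"},
}

# Tally how often each candidate theme appears in each village's transcripts.
theme_counts: dict[str, Counter] = defaultdict(Counter)
for village, code in coded_segments:
    for theme, codes in themes.items():
        if code in codes:
            theme_counts[theme][village] += 1

for theme, counts in theme_counts.items():
    print(theme, dict(counts))
```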
The research team prioritized ethical considerations throughout the study. Informed consent was obtained from all participants, and measures were taken to ensure the confidentiality and anonymity of their responses. This qualitative methodology provided a basis for a holistic exploration of the social dimensions of vulnerability and resilience, offering valuable insights into the coping mechanisms that emerged within the selected Indonesian villages.
---
Results and Discussion
---
Social Factors Influencing Vulnerability
The analysis revealed a prominent theme related to socio-economic status and its impact on vulnerability. In Village A, where economic disparities were more pronounced, community members expressed concerns about limited resources for disaster preparedness. A resident remarked, "Many households here struggle to make ends meet, so investing in disaster kits or evacuation plans often takes a back seat." This sentiment was echoed across multiple interviews, highlighting the role of socio-economic factors in shaping vulnerability.
---
Community-Based Coping Mechanisms
A recurring theme that emerged from the data was the reliance on traditional practices as coping mechanisms. In Village B, residents emphasized the efficacy of community-organized drills based on indigenous knowledge. One participant shared, "Our ancestors passed down ways of predicting floods. We organize drills to make sure everyone knows what to do when those signs appear." This community-driven approach showcased the adaptive nature of traditional practices in improving disaster resilience.
---
Interplay between Formal and Informal Networks
The analysis underscored the intricate relationship between formal and informal disaster management systems. Local government plans in both villages outlined specific roles for community participation. A community leader in Village A emphasised, "We work closely with the local government. They provide resources, and we implement strategies that fit our community." This collaborative approach highlighted the significance of integrating formal and informal networks for effective disaster preparedness.
---
Educational Initiatives and Awareness
Educational initiatives emerged as a crucial theme influencing vulnerability. In Village C, where educational levels were comparatively higher, residents demonstrated greater awareness of disaster risks and preparedness measures. An interviewee stated, "Our schools regularly conduct drills, and children are taught about the local geography and potential hazards." This theme emphasised the role of education in fostering a proactive approach to disaster preparedness.
---
Social Capital and Trust
The qualitative data highlighted the importance of social capital and trust in community resilience. In all villages, close-knit social networks played a pivotal role in sharing information and coordinating efforts during disasters. In summary, the results reveal a complex interplay of social factors shaping vulnerability and a diverse range of coping mechanisms within Indonesian villages. The findings emphasize the significance of context-specific strategies that integrate local practices, leverage social capital, and bridge the gap between formal and informal disaster management systems.
---
Gender Dynamics in Disaster Preparedness
A nuanced theme that emerged was the role of gender in disaster preparedness. In Village C, women often took the lead in organizing and participating in community drills. A female resident noted, "Women are usually the ones at home during the day. We make sure our households are aware and prepared for any emergency." This theme highlighted the particular contributions of women in fostering community resilience and challenged conventional gender roles in disaster management.
---
Impact of Cultural Beliefs on Coping Strategies
Cultural beliefs significantly influenced coping strategies in Village B, where a strong connection to nature and religious practices prevailed. Residents shared accounts of seeking guidance from local religious leaders during times of heightened disaster risk. "Our beliefs are deeply tied to the land. Before making any decisions, we consult with our spiritual leaders to interpret signs from nature," explained a community member. This theme emphasized the need to understand and integrate cultural perspectives into disaster preparedness initiatives.
---
Challenges in Communication and Information Dissemination
Challenges in communication emerged as a crucial theme affecting vulnerability. In Village A, where communication infrastructure was limited, residents faced difficulties in receiving timely information about impending disasters. "We rely on word of mouth, and sometimes the message does not reach everyone in time," expressed a participant. This theme highlighted the need for improved communication strategies, especially in areas with limited technological resources.
---
Adaptive Capacity via Community Training
The analysis identified community training programs as a key element in improving adaptive capacity. In Village C, a proactive community-driven education initiative was credited with empowering residents to respond effectively to disasters. "We have regular training sessions on first aid, evacuation procedures, and even basic search and rescue skills," shared a participant. This theme emphasised the positive impact of ongoing training programs in building the adaptive capacity of communities.
---
Government Support and Infrastructure
The degree of government support and infrastructure emerged as a significant theme influencing community resilience. In Village B, where government initiatives were more pronounced, residents expressed a sense of security derived from well-maintained evacuation routes and designated shelters. "The government has invested in infrastructure that makes us feel more secure during disasters," said a community leader. This theme underscored the significance of governmental contributions in bolstering community resilience efforts.
In end, the qualitative evaluation illuminated a diverse variety of topics, each contributing to the tricky tapestry of community resilience and disaster preparedness in Indonesian villages. These findings provide valuable insights for policymakers, nearby authorities, and humanitarian businesses to tailor interventions that address the specific needs and dynamics of groups going through the consistent threat of natural disasters. Social Factors Influencing Vulnerability: The identity of socio-economic disparities as a sizable subject matter aligns with broader global discussions on the disproportionate impact of failures on marginalized groups. Numerous studies, including Cutter et al. (2016), emphasize the link between socio-economic reputation and vulnerability, emphasizing that economically disadvantaged populations often face better risks and slower recuperation. In the context of Indonesia, this underscores the urgent want for targeted interventions that deal with socioeconomic disparities and make certain that susceptible communities are not left disproportionately pressured via the outcomes of failures. The World Bank's emphasis on inclusive regulations and social safety packages becomes especially applicable in mild of our findings, highlighting the significance of complete techniques that address underlying sociofinancial elements.
The socio-economic theme prompts reflection on the interconnectedness of disaster risk reduction and broader development goals. The Sendai Framework advocates for the incorporation of disaster risk reduction into development planning, emphasizing the need to build resilient communities through inclusive and sustainable development (UNDRR, 2015). Our findings underscore the importance of not only addressing immediate vulnerabilities but also tackling systemic issues related to poverty, access to education, and employment opportunities to enhance long-term resilience.
Community-Based Coping Mechanisms: The emergence of traditional practices as a coping mechanism aligns with global recognition of the importance of indigenous knowledge in disaster risk reduction. The Sendai Framework recognizes the potential of traditional knowledge and practices in enhancing resilience and calls for the incorporation of such knowledge into national strategies (UNDRR, 2015). The demonstrated efficacy of community-organized drills based on indigenous knowledge in Indonesian villages reinforces the idea that community-driven, culturally rooted approaches can play a pivotal role in building resilience. This highlights the need to preserve and integrate traditional practices into formal disaster management plans.
Moreover, the theme of community-based coping mechanisms calls for a re-evaluation of the dichotomy between "traditional" and "modern" approaches to disaster resilience. Integrating traditional practices into contemporary disaster risk reduction strategies not only respects cultural heritage but also leverages the strengths of local communities. As discussions around the world emphasize the importance of context-specific strategies, our findings highlight the potential synergies between age-old practices and modern approaches to create resilient communities capable of withstanding the evolving challenges of climate-related disasters.
Interplay between Formal and Informal Networks: The collaborative approach between local communities and formal disaster management systems aligns with global efforts emphasizing the importance of community engagement in disaster risk reduction. The International Federation of Red Cross and Red Crescent Societies (IFRC) highlights the need for community-led initiatives and partnerships with formal systems to enhance resilience (IFRC, 2018). Our findings reinforce the Hyogo Framework's principle of integrating community-based initiatives into formal structures for effective disaster resilience (UNISDR, 2005). This interplay between formal and informal networks emphasizes the importance of flexible frameworks that recognize the strengths of both local and formalized approaches.
The discussion on the interplay between formal and informal networks prompts considerations of power dynamics and inclusivity. It is essential to ensure that community voices are not only heard but also actively integrated into decision-making processes. Recognizing the particular knowledge and strengths that communities bring to the table is critical for the success of collaborative initiatives. As the global discourse on resilience shifts towards participatory approaches, our findings underscore the importance of fostering genuine partnerships that empower local communities as active agents in disaster risk reduction.
Educational Initiatives and Awareness: The association between education and disaster awareness aligns with international efforts to prioritize education as an essential component of disaster risk reduction. UNESCO acknowledges the role of education in building a culture of safety and resilience, promoting knowledge dissemination, and fostering informed decision-making in the face of disasters (UNESCO, 2019). Our findings strengthen the idea that informed communities are better prepared to respond to and recover from disasters, emphasizing the need for comprehensive educational initiatives that extend beyond formal school settings.
Education and awareness prompt reflection on the role of knowledge dissemination in promoting a culture of preparedness. The International Federation of Red Cross and Red Crescent Societies (IFRC) advocates for community-based education programs that empower individuals to take ownership of their safety (IFRC, 2017). In the Indonesian context, our findings underscore the need for targeted initiatives to enhance awareness and preparedness, especially in areas with lower educational access. This highlights the interconnectedness of education, community resilience, and sustainable development, reinforcing the importance of fostering a culture of continuous learning and preparedness at all levels.
Social Capital and Trust: The theme of social capital and trust resonates with global recognition of the role of social networks in enhancing resilience. Aldrich and Meyer (2015) emphasize the significance of social capital in post-disaster recovery, highlighting how strong social bonds contribute to community resilience. The United Nations Office for Disaster Risk Reduction (UNDRR) recognizes the importance of social cohesion in withstanding and recovering from disasters (UNDRR, 2017). Our findings confirm the intangible but vital role of social bonds in enhancing disaster resilience, calling for interventions that strengthen community ties to foster collective resilience.
The discussion on social capital prompts considerations of social equity and inclusivity. It is vital to acknowledge and address existing social inequalities that could affect the distribution of social capital within communities. Vulnerable groups may face additional barriers in accessing and benefiting from social networks, potentially exacerbating existing disparities during disasters. As global discussions increasingly emphasize the importance of leaving no one behind in disaster risk reduction efforts, our findings underscore the need for strategies that promote social inclusion and equal access to social capital, ensuring that the benefits of strong community ties reach all members.
Gender Dynamics in Disaster Preparedness: The exploration of gender dynamics in disaster preparedness aligns with global calls for gender-responsive approaches in disaster risk reduction. The Sendai Framework emphasizes the importance of gender equality in building resilience and highlights the particular vulnerabilities and strengths of different genders (UNDRR, 2015). The International Federation of Red Cross and Red Crescent Societies (IFRC) advocates for gender-sensitive strategies that recognize and address the distinct needs of women, men, girls, and boys in disaster risk reduction (IFRC, 2016). Our findings underscore the importance of recognizing and empowering the diverse contributions of women in fostering community resilience, challenging traditional gender roles.
The theme of gender dynamics prompts considerations of intersectionality and the interplay between gender and other social factors. Vulnerable groups, including women with lower socio-economic status, may face compounded challenges in disaster situations. Recognizing the intersectionality of vulnerabilities is vital for developing inclusive strategies that address the diverse needs of all community members. As the global discourse on gender equality in disaster risk reduction evolves, our findings highlight the need for intersectional approaches that consider the complex interplay of gender, socio-economic factors, and cultural dynamics in shaping resilience.
---
Impact of Cultural Beliefs on Coping Strategies:
The influence of cultural beliefs on coping strategies aligns with broader discussions on the cultural dimensions of disaster risk reduction. The Centre for Research on the Epidemiology of Disasters (CRED) recognizes the significance of cultural heritage in shaping resilience strategies and calls for the preservation of cultural practices in the face of changing risk landscapes (CRED, 2019). The Sendai Framework underscores the value of cultural diversity in enhancing resilience and advocates for strategies that respect and integrate local beliefs (UNDRR, 2015). Our findings emphasize the need for culturally sensitive approaches that recognize and honor local beliefs in disaster preparedness initiatives, reinforcing the idea that cultural heritage is a valuable asset in building resilient communities.
The discussion on the impact of cultural beliefs prompts considerations of cultural preservation and the potential tension between modernization and traditional practices. As communities evolve and face growing exposure to global influences, preserving cultural heritage becomes essential for maintaining resilience. Striking a balance between integrating traditional practices and adapting to modern risk landscapes is crucial. Our findings underscore the importance of recognizing and valuing cultural diversity as an integral component of community resilience. This aligns with global efforts to develop strategies that honor cultural identities while simultaneously addressing contemporary challenges in disaster risk reduction.
Challenges in Communication and Information Dissemination: The identified challenges in communication resonate with international concerns about the digital divide and information access in disaster-prone regions. The International Telecommunication Union (ITU) highlights the importance of improving communication infrastructure and ensuring equitable access to information in disaster risk reduction (ITU, 2020). Our findings align with international calls for overcoming barriers in information dissemination to strengthen early warning systems and community response strategies. Addressing these challenges is vital for building effective communication networks that reach all community members, regardless of their geographical location or technological resources.
The discussion on communication challenges prompts considerations of inclusivity and the need for diverse communication channels. Recognizing that different segments of the population may have varied access to communication platforms is vital for developing inclusive strategies. Leveraging a combination of modern technologies and traditional communication methods can ensure that information reaches a wider audience. As the global discourse on communication in disaster risk reduction advances, our findings underscore the importance of tailored approaches that consider the specific characteristics of each community and prioritize inclusivity in information dissemination. Adaptive Capacity through Community Training: Sustaining the benefits of education programs calls for continued efforts to ensure that communities are equipped with the knowledge and skills necessary to respond effectively to disasters.
The discussion on adaptive capacity through community training prompts considerations of local empowerment and the role of communities as active agents of their own resilience. Empowering communities to take ownership of their safety and well-being is essential for building sustainable resilience. As international discussions increasingly focus on the shift from a response-oriented approach to a proactive, preparedness-focused strategy, our findings underscore the importance of fostering a culture of continuous learning and skill development within communities. This aligns with global efforts to promote community-driven initiatives that enhance local adaptive capacity and contribute to overall disaster resilience.
Government Support and Infrastructure: The theme of government support and infrastructure echoes global discussions on the role of governance in disaster resilience. Cutter et al. (2016) emphasize the significance of governance and institutions in reducing disaster risk, highlighting the need for effective policies and infrastructure. The Sendai Framework recognizes the critical role of governance in building resilience and calls for the integration of disaster risk reduction into national development policies (UNDRR, 2015). Our findings affirm the positive perceptions of government initiatives and underscore the importance of continued investment in infrastructure and policy frameworks that bolster community resilience.
The discussion on government support and infrastructure prompts considerations of accountability and the need for transparent and inclusive governance. Ensuring that government initiatives are responsive to the needs of local communities and are implemented in a transparent manner is critical for building trust. As the global discourse on governance in disaster risk reduction advances, our findings highlight the importance of fostering collaborative partnerships between governments and communities. This collaborative approach ensures that policies and infrastructure investments align with the particular characteristics of each community, contributing to the development of resilient societies.
---
Conclusion
The multifaceted exploration of community resilience and disaster preparedness in Indonesian villages has yielded essential insights into the intricate interplay of social factors, coping mechanisms, and governance systems. The identified themes underscore the overall importance of addressing socio-economic disparities, integrating traditional practices, fostering collaboration between formal and informal networks, prioritizing education and awareness, recognizing the role of social capital, promoting gender-responsive strategies, honoring cultural beliefs, overcoming communication challenges, and empowering communities through training and government support. These findings contribute to the global discourse on disaster risk reduction, emphasizing the need for context-specific, inclusive, and holistic approaches that empower communities as active agents in building resilient societies. Recognizing the interconnectedness of these issues, our study advocates for integrated strategies that acknowledge the diverse dynamics within Indonesian villages, ultimately paving the way for more effective, sustainable, and community-centered disaster resilience initiatives.
5fd8f0661e0e1267282e34fee61d6afbe96e8c90 | Hybridizing research and decision-making as a path toward sustainability in marine spaces | 2,023 | [
"JournalArticle"
] | Projecting the combined effect of management options and the evolving climate is necessary to inform shared sustainable futures for marine activities and biodiversity. However, engaging multisectoral stakeholders in biodiversity-use scenario analysis remains a challenge. Using a French Mediterranean marine protected area (MPA) as a marine social-ecological case study, we coupled codesigned visioning narratives at horizon 2050 with an ecosystem-based model. Our analysis revealed a mismatch between the stated vision endpoints at 2050 and the model prediction narrative objectives. However, the discussions that arose from the approach opened the way for previously unidentified transformative pathways. Hybridizing research and decision-making with iterative collaborative modeling frameworks can enhance adaptive management policies, leveraging pathways toward sustainability. | INTRODUCTION
While substantially contributing to human wellbeing, the ocean is increasingly threatened by local human action and climate change 1 . Marine protected areas (MPAs) are advocated as a key strategy for simultaneously protecting biodiversity and supporting coastal livelihoods 2,3 . They are now part of the United Nations Convention on Biological Diversity and Sustainable Development Goals. Their level of protection encompasses fully protected areas where all activities are prohibited to a range of "partially protected MPA" that allow activities to different degrees 4,5 . The former are known to deliver ecological benefits through exclusion of human activities [6][7][8] , whereas the latter assume that conservation will be achieved through cooperation in the social space that leads to sustainable use 9 .
While scientific evidence shows that most benefits, including biodiversity conservation, food provisioning and carbon storage, stem from fully or highly protected areas, most established MPAs are of lower protection levels because of lobbying from current users and political bias towards creating many, rather than highly protected areas [6][7][8]10 . Also, it has been argued that excluding people who are dependent on those areas for their livelihood might not be socially equitable 11 , and that cultural and historical assessments should be part of MPA design. Potential benefits and beneficiaries must also be highlighted and understood at a local level to discuss trade-offs and address the ecological, social and economic requirements of sustainability 9 .
However, guiding principles are lacking on how to manage trade-offs in specific social-ecological systems (SES) 12 . Indeed, while conceptual models of SES have been elaborated to characterize human-nature interactions and inform decisionmaking [13][14][15][16][17][18][19] , and previous works have been developed [20][21][22] , effective science-policy interfaces in marine environments are scant 8 . There is, therefore, room for more effective and inclusive science-policy frameworks, including dedicated modeling approaches. Each step of collaborative prospective modeling from elaborating narratives to interpreting simulation results, including model conception, may help explore the ecological, social and economic consequences of management alternatives at a local level and in the context of ongoing climate change.
For decision-makers, there is a growing awareness that integrating valuable scientific knowledge and stakeholders during the management process can offer better outcomes [23][24][25][26][27][28][29][30] and is less likely to result in resources' collapse 31,32 . However, such integration raises three main challenges for science. First, how to collaboratively develop narratives that break with the usual approach based on ongoing trends-which has failed to mobilize transformative change 33 -by including stakeholders and scientists from a diversity of disciplines. Second, how to shift from resources toward ecosystem-based management, and addressing interactions among scales within SES 34 by using ecosystem-based modeling. Third, how to better align the modeling practice and illustration of trade-offs with the decision-making process, ultimately setting management rules 23 by fitting the modeling on MPA management plans.
In this paper, we argue that bridging the gap between what the literature recommends and what is done in the field requires an innovative science-policy framework that identifies potential benefits, tackles necessary trade-offs and promotes collective deliberation on management measures and rules. To test this hypothesis, we hybridized research and decision-making through collaborative prospective modeling in the case of a French Mediterranean MPA (the Natural Marine Park of the Gulf of Lions), in the context of climate change. Climate change impacts on the ocean (e.g., sea level rise, temperature increase, pH decrease, and to a lesser extent, moisture decrease) are expected to alter marine ecosystems functioning 35 . In the semi-closed Mediterranean Sea, climate change effects on ecosystems are already visible, with the most noteworthy impacts reported being oligotrophication and diversity composition change 36,37 . Hence, scientists, policymakers and stakeholders involved in the management of such MPA were involved in the present transdisciplinary and multi-actors' research. We followed a three-step process (Fig. 1) over a three-year period (2015-2019), which entailed: (i) conducting three workshops in stakeholders' groups (see Supplementary Material Note 1); (ii) developing a social-ecological model through agent-based modeling; (iii) collectively exploring the simulations' results. The study adds novelty to previous work [13][14][15][16][17][18][19][20][21][22] by combining participatory narrative-building with modeling to shape a deliberation tool in the marine environment. Although an economic analysis would be necessary to identify potential benefits and beneficiaries of different scenarios, such an analysis was not developed as it was beyond the scope of our study. Here, we describe how aiming for sustainability requires a framework for continued work that allows us to (i) build contrasting narratives for the future addressing biodiversity conservation, food provisioning and economic activity in the context of climate change; (ii) explore resulting strategies with a science-based SES model illustrating trade-offs; (iii) deliberate about results in order to adjust strategies.
---
RESULTS
---
Building disruptive narratives to open the range of possible futures
Recent scientific works suggest that we need to move beyond classical scientific studies depicting future trajectories of decline that have failed to mobilize transformative change 23 . Exploring different futures through narrative scenarios proves to be helpful to address MPA management issues in a constructive manner 36 . Lubchenco and Gaines notably emphasize how narratives help in framing our thinking and action 38 . Indeed, as in mythology or literature, narratives act as a reference framework to which one can refer to make decisions adapted to unpredicted but pictured contexts. In the present context, the challenge was to extend or amend our reference scheme by imagining transformative futures.
Fig. 1 Key steps of the framework proposed. A three-step process over a three-year period that consists in conducting workshops in stakeholders' groups for building disruptive narratives (step 1), developing a social-ecological model through agent-based modeling for implementing narratives translated into scenarios (step 2), and collectively exploring the simulation results, eventually leading to modifying scenario hypotheses and re-shaping scenarios (step 3). A prerequisite to the three-step process consists in agreeing collectively on the main issues to be addressed.
Here, we did so by inviting scientists, stakeholders, and decision-makers to participate in three workshops led by a specialist in building prospective scenarios (see Methods). Each time, participants were split into three groups to progressively write a narrative about the Natural Marine Park of the Gulf of Lions by 2050 (see Supplementary Notes 1-2). It led to the writing of three original and transformative narratives (Table 1). 2050 was considered close enough to fit with the real political deadline, i.e., the completion of two management plans, and far enough to deal with some expected effects of climate change, such as the decline of primary production in marine ecosystems.
---
Ecosystem-based modeling to address SES complexity
Sustainably managing the ocean requires MPA managers to adopt integrated ecosystem-based management (EBM) approaches that consider the entire ecosystem, including humans (Fig. 2). While fishing affects target species, marine food webs and habitats (depending on fishing and anchoring gear), climate change is expected to influence the dynamics of all marine organisms in terms of growth and spatial distribution (including primary production). EBM focuses on maintaining a healthy, productive, and resilient ecosystem so it provides the functions humans want and need. It requires a transdisciplinary approach that encompasses both the natural dimension of ecosystems and the social aspects of drivers, impacts and regulation 39 .
Whether "end-to-end models" are recommended by marine scientists to study the combined effects of fishing and climate change on marine ecosystems, using one of these tools was beyond the scope of the project (see Methods, Overview of end-toend models). We therefore looked for alternative approaches and built on knowledge and data from the park management plan and on past research conducted on the area: ecosystem-based quality indexes (EBQI) describing the functioning of specific ecosystems and mass-balance models analyzing the overall ecosystem structure and fishing impacts (Ecopath with Ecosim) (see Methods, Ecosystems description). We mapped four major park habitats (see Fig. 3): "sandy & mud" (31 species), "rock" (18 species), "posidonia" (17 species), and "coralligenous" (15 species). Here, (group of) species are represented in aggregate form (biomass density) and linked together with diet ratios (see Supplementary Tables 1234).
This ecosystem-based representation is at the core of our modeling exercise. To simulate ecosystem dynamics, we used the ecosystem food webs as transmission chains for the types of controlling factors described in the narratives 40 : bottom-up control (climate, management) and top-down control (fisheries, management). For each (group of) species, biomass variation results from the equal combination of two potential drivers on a yearly basis: the abundance of prey (bottom-up control, positive feedback) and the abundance of predators (top-down control, negative feedback) (see Methods, Food-web modeling). To link this food-web modeling with the driving factors described in the narratives, we adopted an agent-based modeling framework. Agent-based models (ABMs) are already used for SES applications and science-policy dialog (see Methods, Rationale for ABM). We then developed a spatially explicit model for the main dimensions of the MPA described in the narratives. To set up agents and their environments, we used data from the ecosystem-based representation and geographic information systems (GIS) layers provided by the MPA team. To model space, we used a regular grid, the size of each cell being related to the average size of an artificial reef village (0.25 km 2 ). In accordance with our prospective horizon, simulations were run to 2050 with an annual time step.
The food-web model is located at the cell level with the previous year's outputs as input data for each new year. Other human and non-human agents are also represented at the cell level. At this stage, we modeled temporal dynamics but lacked important spatial dynamics, such as adaptive behaviors of human and nonhuman agents relocating their activities as a result of management measures. For now, interactions between agents are mostly made of spatial-temporal co-occurrence with restricted mobility.
Despite this, we were able to simulate the variation in any group of species in terms of biomass density in the case of a change in primary production, fishing effort, artificial reef planning or reintroduction of species. To disentangle the efficacy of the MPA's management measures from climate change impacts, we ran each scenario with and without climate change (see Fig. 4). Indeed, the variation in primary production is the only difference among scenarios that does not depend on management choices at the MPA level. We could capture some of their propagation and final effects on indicators similar to those of the park management plan and the ecosystem function and natural resources targeted by the narratives: total biomass, harvested biomass, and diving sites access (see Methods, Modeling of drivers and indicators of ecosystem status). For now, all indicators are expressed in biomass quantity and number/share of accessible diving sites (physical units), not in economic value (monetary units). This would require an accurate economic analysis, which is to be developed in a future experiment.
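To make this architecture concrete, the sketch below is an illustrative reconstruction rather than the model's actual code: the class names, the placeholder food-web update, the fixed removal rate and the toy grid values are all assumptions introduced here. It shows how an annual loop over grid cells could apply a per-cell update, remove fished biomass outside fully protected cells, and aggregate the three indicators used above (total biomass, fished biomass, accessible diving sites).

```python
# Minimal sketch (not the authors' code) of the annual, cell-level simulation
# loop and of the three indicators reported in the text. All names and numbers
# are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class Cell:
    habitat: str                      # "muddy", "sandy", "rocky", "posidonia", "coralligenous"
    fully_protected: bool = False     # FPA flag set by the scenario zoning
    diving_site: bool = False         # currently appealing diving spot
    biomass: dict = field(default_factory=dict)   # group -> biomass density

def update_food_web(cell, primary_production_factor):
    """Placeholder for the yearly bottom-up / top-down update (see Methods)."""
    for group in cell.biomass:
        cell.biomass[group] *= primary_production_factor  # stand-in dynamics

def run_scenario(cells, fished_groups, years=range(2018, 2051),
                 pp_trend=1.0, fishing_effort=1.0):
    for _ in years:
        for cell in cells:
            update_food_web(cell, pp_trend)
            if not cell.fully_protected:
                for g in fished_groups:            # simple proportional removal
                    if g in cell.biomass:
                        cell.biomass[g] *= (1 - 0.05 * fishing_effort)
    total = sum(sum(c.biomass.values()) for c in cells)
    fished = sum(c.biomass.get(g, 0.0) for c in cells for g in fished_groups)
    diving_access = sum(1 for c in cells if c.diving_site and not c.fully_protected)
    return {"total_biomass": total, "fished_biomass": fished,
            "accessible_diving_sites": diving_access}

# Example: a two-cell toy grid under a declining primary production scenario.
grid = [Cell("rocky", diving_site=True, biomass={"hake": 2.0, "octopus": 1.0}),
        Cell("muddy", fully_protected=True, biomass={"hake": 3.0})]
print(run_scenario(grid, fished_groups=["hake"], pp_trend=0.999))
```

In the actual model, the per-cell update follows the bottom-up/top-down food-web rule detailed in the Methods, and fishing pressure is driven by scenario-specific effort and zoning rather than a fixed removal rate.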
---
Informing management choices based on simulation results
No scenario perfectly reached the objectives it was designed for (Fig. 4). However, they all open interesting perspectives, such as the occurrence of unexpected co-benefits. In effect, the developed framework allows us to look at the building blocks of the scenarios and the combination of variables to explain the obtained results, as well as proposing explanations and suggesting new hypotheses for enhancing the efficacy of each scenario. Table 2 summarizes the major assumptions of the three scenarios developed by the project team based on the narratives. Scenario 1, "Enhancing total biomass", aimed at increasing biodiversity. Simulation results showed that undersea biomass varied little (-0.11%) despite the primary production decrease under climate change (see Supplementary Table 5). However, the trophic chain structure changed, with a large increase in species important to local fisheries (see biomass variation of each group in Supplementary Tables 6-10). For example, mackerel, whiting, hake, tuna, octopuses, and soles notably increased in muddy and sandy ecosystems; octopuses, seabass, echinoderms, bivalves, and gastropods in the coralligenous ecosystem; echinoderms, octopuses, and conger in the rocky ecosystem; suprabenthos, echinoderms, octopuses, conger, and scorpion fish in the Posidonia ecosystem. The increase in the above listed species is balanced, due to the double prey/predator constraint, by a decrease in the biomass of other existing species: benthic invertebrates and fish feeding on benthic crustaceans in muddy and sandy ecosystems; benthic macrophytes, scorpion fish, suprabenthos, and lobsters in the coralligenous ecosystem; suprabenthos, salema, seabass, and scorpion fish in the rocky ecosystem; and, worse, Posidonia itself, salema, and crabs in the Posidonia ecosystem. Simulation results also showed that fished biomass drops by 36%, which is consistent with the high share of fully protected areas (FPAs) in the absence of spatial dynamics and fishing effort relocation. Also, most diving sites that are currently appealing will no longer be accessible (-98%), which is expected to support habitat and species biomass regeneration but would mark the end of an attractive activity.
Table 1. Co-designed visioning narratives for the Natural Marine Park of the Gulf of Lions by 2050 built at the experts' workshops.
Narrative 1: Protecting the ecological heritage and strengthening the marine food web.
Starting point: it reports the progressive deficiency of top predators and keystone species (e.g., groupers, sharks) and its corollary: the impoverishment of the whole trophic chain 74 . But this scenario considers the uncertainties surrounding the idea of good ecological status 75 and shifting baselines [75][76][77] . Hence, specifying an ideal ecological state to achieve didn't make so much sense for the participants, who focused on preserving key habitats, keystone species, and enhancing the actual food chain 78 . This strategy was inspired by the ecological concept: the more diversity there is, the greater the resilience of the system 79,80 .
Management rules: the participants imagined extending full protection up to 30% of the MPA. This ratio was chosen to echo the most ambitious existing target worldwide: the International Union for the Conservation of Nature recommendation that at least 30% of the entire ocean should benefit from strong protection. Participants also proposed stabilizing fishing effort and reintroducing top predators like groupers in the suitable habitats.
Climate change: decline of primary production in marine ecosystems.
Narrative 2.
Starting point: it is a strong awareness among the members of the group of the climate change expected consequences on marine primary production, being the first level of the food chain 81,82 : less nutrient availability for plankton development, through a limitation of river inflows and a reduction of coastal upwelling. Coupled with the actual decrease of nutrient flows due to dams on rivers and partial closure of estuaries, this would cause a decline of primary production, which would then affect the upper compartments of the ecosystems, including fished species. In order to avoid this global decline and to maintain the biomass of commercial species, stakeholders proposed actions to be taken on land that are likely to restore good nutrient availability for plankton development*. To create new sources of income, they suggested aquaculture could be developed in the lagoons in the form of multi-trophic farms (fish/oyster/algae or shrimp/oyster/algae). They also got inspired by "slow food" movements and invented a "slow fishing" style, in the sense that fishing should respect the life cycles of different species and marine habitats, in terms of harvesting gears and anchoring systems. It would still be profitable enough for fishermen because the products would be eco-labeled and valued as such.
Management rules: only this narrative allows increasing the fishing effort, while artificial reefs for productive purposes are favored and commercial species are reintroduced. The share of fully protected areas is kept to the current level (2% of the MPA).
Climate change: decline of primary production in marine ecosystems, counterbalanced by permaculture-type farming* and a spatial development improving the circulation between lagoons, rivers and sea.
*Management measures to be taken upstream: permaculture-type farming would improve soil quality; thus, the water runoff would supply rivers with good nutrients that would be transported to the sea and enhance plankton development. To ensure the good quality of water and nutrients, monitoring at the lagoon level should be performed. To avoid any eutrophication phenomenon, nutrients should not be blocked near the coast by facilities, so the channels of the lagoon should be left open and the undeveloped river mouths should be kept free. Aquaculture in the lagoons would also limit this risk.
Narrative 3.
Starting point: it lies in the climate change expected consequences on the coastline and the consideration of a possible radical transformation in coastal livelihoods due to the loss of biomass of the sea induced by a primary production decrease 82 . Even if the sea level rise consequences exceeded our time frame, participants considered it as a major driver of change. They presumed management would fail to prevent sea level rise and decided to put their efforts into making the best of the new resulting land/sea-scape. They invented a new economic model for the park area, valuing marine underwater seascapes, eco-friendly tourism around artificial reefs and wind turbines, or even an underwater museum around aesthetical artificial reefs.
Management rules: participants assumed a commercial wind farm would be created allowing for a multifunctional exploitation of the water column, including educational sea trips. Artificial reef villages would be densified to create a relief zone for the rocky coast diving sites. These reefs would have a cultural function, like an underwater museum. Their design would rely on ecological and aesthetical requirements. An intermediary target for fully protected areas was set after the Member States Parties to the Convention on Biological Diversity (CBD) agreed to cover 10% of their coastal and marine areas with MPAs by 2020 (CBD Aïchi target 11).
Climate change: decline of primary production in marine ecosystems.
Hence, scenario 1 proposed an extension of FPA up to 30% and localized it on the richest areas in terms of biodiversity, which leads to a sharp drop in the potential fished biomass indicator. While this strong protection may not be sufficient to trigger system recovery as a whole, it greatly changes the trophic chain structure, improving the biomass of some very important targeted fishing species (see Supplementary Table 6-10). This improvement could be seen as a co-benefit aligned with the analysis by Sala et al. 10 . It opens avenues to move forward in searching for "win-win" strategies and opens a perspective of co-benefits for local fisheries in case spillovers occur and adequate fisheries management rules are to be defined.
Moreover, if coupled with the same kind of measures that allow us to cancel the negative effect of climate change on primary production (as in scenario 2), scenario 1 would exhibit the best results in terms of total and living biomass variation, although these two indicators are insufficient to assess the quality of the ecosystem. Two hypotheses could be further tested: (i) the time horizon may not be sufficient, and/or (ii) the intensity of the reintroduction of grouper as a keystone species is insufficient given its low reproduction rate and longevity. Nevertheless, it would be interesting to review this scenario searching co-benefits strategies. A new version of the model could test pairing spatial use rights and different levels of protection within strategic zoning and a connected MPA network. It could also consider the spillover of marine organisms and the relocation of human activities due to FPA. In this case, it would be important to determine if the spillover of marine species would be enough so that the relocation of the fishing effort would not significantly affect ecosystem functioning of unprotected areas. In a timely manner, additional measures regulating the fishing effort from a strategic planning/ zoning perspective should complement the framework. Scenario 2, "Enhancing harvested biomass", aimed at increasing food provisioning. Simulation results showed that total fished biomass increases by 2% with or without considering climate change impacts on primary production, which matches the guideline of the narrative. However, fished biomass increases only in the muddy habitat, by >3%, while it decreases by between -3 and -32% in the other habitats, as a result of the counterbalancing effect of keeping the 2% share of FPA. Interestingly, the total biomass in the rocky habitat decreases less (with climate change) or even increases (without climate change) in scenario 2 compared to scenario 1.
At the same time, while living biomass seems stable when climate change is not included (-0.03%), it will decrease with primary production (-0.89%) in contrast with scenario 1. Indeed, when compared to scenario 1, few species showed significant downward variation, except crabs in the Posidonia ecosystem. Also, even with the smallest FPA's share, currently appealing diving sites are reduced by 63%, which confirms that most existing diving spots are concentrated in areas of high natural value in or around the existing MPA.
Scenario 2 favors fishing by increasing fishing effort (5%) and limiting FPA (2%). It also supports fishing with the reintroduction of target species and the densification of the species' habitats. This scenario notably avoids the negative effects of climate change on primary production due to ecological measures taken at the watershed level. However, comparative simulation results illustrate that marine park management measures alone would not generate such an effect. In view of Fig. 2 The social-ecological system (SES) of the Gulf of Lions marine protected area. Representing the Natural Marine Park of the Gulf of Lions as a social-ecological system outlining the main interactions on the territory to be addressed when talking about managing economic activities and environment protection. This representation has been issued through conducting a workshop held around a chronological matrix summarizing the mean features of the territory.
the results, the fishing effort may have been increased too early, thereby canceling out the efforts made elsewhere. Moreover, catches might have been higher if the model had considered a shift of fishing activities from FPA to areas where fishing is allowed. Here, FPAs are located on rocky, Posidonia, and coralligenous habitats, which are areas of greatest natural value (GIS layer). Even if the share of FPA is the lowest in this scenario, almost all of the rocky habitat (excluding artificial reefs) is considered, which is one reason explaining the biomass increase in this habitat. This shows the importance of precise and strategic zoning in determining access rules in MPAs. This is also due to the densification of existing villages of artificial reefs and the creation of new villages in the rocky habitat. Three new hypotheses could be further tested: (i) maintaining the fishing effort at its 2018 level, (ii) increasing the introduction of target species, and (iii) enhancing the functioning of the trophic chain by reintroducing keystone species rather than target species of fishing?
Finally, scenario 3, "Enhancing diving site access", aimed at increasing eco-tourism. Simulation results indicated that the main objective of the scenario is not achieved since diving access is restricted by 100% and 91% respectively in the coralligenous and rocky habitats, that host most currently appealing diving sites. At the same time, living biomass (and total biomass) decreases more than in scenario 1 (-0.6%) and less than in scenario 2 in the same climatic context of primary production reduction, reflecting the difference in FPA cover of the different scenarios. Interestingly, despite taking for granted the loss of historical ecosystems and traditional economic activities, and including primary production reduction, the total biomass increases by 0.12% in the rocky habitat, which is again a better score than what scenario 1 reached. Finally, fished biomass lowers by 14%, due to a 10% FPA's share, which is in accordance with a narrative that promotes the creation of alternative economic activities.
Scenario 3 produces the most striking results, since diving site access declined sharply, whereas the scenario was supposed to favor it. The explanation for these results lies in a contradiction between the assumptions of the narrative. In fact, by placing 10% of the territory under full protection and locating these areas on sites of high biodiversity, FPAs end up covering the very sites favored by divers. This contradiction between the goal of the narrative and the restricted access to FPAs proves to be a determining factor in the success of the scenario. Retrospectively, this may seem obvious, but the exact delimitation of access rules to protected areas remains a hot topic. This scenario is of high interest because it illustrates an actual dilemma and confirms scenario 2's analysis that access rules need to be aligned and defined with precise and strategic zoning. Other hypotheses to be tested include allowing recreational diving access to FPAs while extractive activities remain prohibited.
Fig. 3 The Gulf of Lions marine protected area food web. A snapshot of the trophic flows in the ecosystem during a given period describing the ecological functioning of the Natural Marine Park of the Gulf of Lions.
---
DISCUSSION
Our analysis highlights the usefulness of using a three-step (plus one) framework, hybridizing a collaborative modeling approach and a decision-making process (Fig. 1) as a way to identify both the future desired for an MPA and the pathways to get there. Similar collaborative approaches have been developed by the Commod community 18 . A Commod-type project can focus on the production of knowledge to improve understanding of the actual SES, or it can go further and be part of a concerted effort to transform interaction practices with the resource or forms of socio-economic interactions 18 . Ours is original as it aims not only to share a common understanding of the SES at present and help solve current challenges, but also to anticipate and create a shared future. Indeed, the proposed framework allows discussion of hypotheses concerning the future of the management area, which enables the reshaping of our thinking and the potential framing of new strategies. The framework acts as a dialog space for people concerned with SES and willing to support the implementation of management plans. This dialog space offers the possibility to realize that there is a difference between expectations or likely effects of management options and the complexity of reality. Indeed, the simulation results only sometimes illustrated the expected effects of the narratives. In this respect, our method paves the way for questioning beliefs, which did not occur in previous similar studies 10 . It contributes to moving to informed strategies, as recommended by Cvitanovic et al. 41 . The science-policy future experiments we conducted considered place-based issues, participants' knowledge, and imaginaries. Scientists coming from ecology and social sciences, decision-makers, and other MPA stakeholders all found the approach to be groundbreaking; by opening the box of scriptwriting, involved stakeholders experienced a way to construct new narratives and broaden solutions for ocean use, as advocated by Lubchenco and Gaines 38 . However, such an approach must be taken cautiously, as it is time-consuming for all participants. At the beginning of the project, participants shared concerns about the usefulness of a prospective approach not connected to a real political agenda. The mobilization of tools during the workshops (see Methods, Prospective workshops) was beneficial to show how much the approach was anchored in reality, and allowed for creating a common ground. In the end, most participants underlined how instructive it was to meet with each other and exchange viewpoints on challenges concerning the future of the MPA rather than being consulted separately as it usually happens.
Another interesting point is that the proposed framework fosters anticipatory governance capacity by testing assumptions, understanding interdependencies, and sparking discussions. It avoids policymakers acting in their own jurisdiction generating spillovers that modify the evolutionary pathways of related SESs or constraining the adaptive capacity of other policymakers 42 . Lack of coordination between policy actors across jurisdictions and incomplete analysis of potential cascading effects in complex policy contexts can lead to maladaptation 42 . In this regard, our framework can contribute to understanding the marine space as a "commons" 43 and to resolving issues facing an MPA as a decentralized governance institution. Marine parks are social constructs that must build on historical legacy and be invested with new commonalities to become legitimate and formulate acceptable, sustainable policies (see Supplementary Note 1).
The framework also allowed us to collaboratively explore the impacts of alternative management scenarios on marine SESs considering climate change, identifying benefits and beneficiaries, and resulting trade-offs among ecological functions supporting them. This experience led to interesting conclusions from the simulation results themselves. The latter showed that co-benefits may arise and be favored by a precise and coherent system of rules of access and uses complementing a more physical, biological, and ecological set of measures. Our findings showed that some trade-offs might satisfy several objectives, even if not those targeted first, opening the way to potential co-benefits, as shown by 10 . For instance, the strong protection extension in scenario 1 changed each species' biomass distribution within each ecosystem, improving the biomass of some important fished species and opening avenues to search for "win-win" strategies. Similarly, measures allowing us to cancel the negative effect of climate change on primary production proposed in scenario 2 would increase total biomass while maintaining biodiversity in scenario 1.
More generally, this research developed a companion modeling framework that would enable us to move forward in the search for win-win strategies by pairing strategic zoning of high protection and access rules. As far as we know, the co-designed model we developed is the only agent-based model combining collaborative and ecosystem-based modeling that can be used as a lab experiment to identify co-benefiting strategies in marine spaces. Nevertheless, some improvements are needed, as the model suffers from several shortcomings. The first difficulty faced in the modeling exercise was the mismatch between the spatial scales of ecological and climate modeling. While the former operates at the habitat scale (1 km2), the latter provides smoothed environmental variables at a resolution coarser than 50 km2, which cannot resolve thresholds leading, for instance, to life-cycle bottlenecks. This points to the need to downscale climate projections to scales relevant for ecosystem functioning. Another concern lies in improving the modeling tool by describing spatiotemporal dynamics arising from the spillover of marine organisms 44 , the resilience brought by population connectivity 45 , and the relocation of human activities 46 . Conducting a sensitivity analysis, or building alternative output indicators, would help disentangle and clarify the different modeled effects within each scenario. There is a tension here between rewriting the scenarios and preserving the collaborative scriptwriting that led to the scenarios implemented, which reflects the richness of the stakeholders' engagement.
Finally, marine management should be an inclusive, iterative process, where modeling acts as an ongoing exploratory experiment to identify the conditions under which co-benefits and win-win strategies can be realized. Hence, the modeling process facilitates interactions between participants in a transparent and open process. One can thus imagine working sequentially until satisfactory results are obtained for any stakeholder involved. This search for a hybridized collaboration framework in the construction of policies proves particularly fruitful in creating a shared future and looking for sustainability.
Fig. 4 Simulation results for the scenario 1 (S1), scenario 2 (S2), scenario 3 (S3) in each ecosystem. Those three scenarios correspond to the narratives that emerged from stakeholders' groups (SG) (see Table 1). Scenarios 1 and 3 make no special provision for the effects of climate change and therefore include an assumption about the effects of climate change (CC) in the form of reduced primary production. They are rated "+ CC". On the other hand, narrative 2 and thus scenario 2 provides for combating climate change, therefore it does not include an assumption regarding the impacts of climate change and is scored "no CC". S1 + CC: enhancing total biomass with primary production decreasing due to climate change. S2 no CC: enhancing harvested biomass without decreasing primary production. S3 + CC: enhancing diving sites access with primary production decreasing due to climate change. a Evolution of the total biomass representing the evolution of the sum of biomass of all species in each ecosystem between 2018 and 2050. b Evolution of the living biomass representing the evolution of the difference between the total biomass and the fished species biomass in each ecosystem between 2018 and 2050. c Evolution of the fished biomass representing the evolution of the sum of biomass of all fished species in each ecosystem between 2018 and 2050. d Evolution of the diving sites access in each ecosystem between 2018 and 2050. e Evolution of the share of fully protected areas in each ecosystem between 2018 and 2050.
---
METHODS
---
Prospective workshops
Each of the three groups focused on fostering one of the three ecosystem functions considered: production of total biomass, fish stock level for fishing activities, or potential access to diving sites. These functions allow us to work on interactions between biodiversity conservation and economic development. The proxies used for these ecosystem functions are also aligned with the ones used in the park management plan, which helps the science-policy dialog.
To reach the objectives of the narratives, participants were especially requested to give indications about considering climate change impact or not, fishing effort evolution, spatial sea-users' rights (FPA), facilities planning (artificial reefs, floating wind turbines, harbors and breakwaters, multipurpose facilities) and ecological engineering (reintroduction of species), the main features of the social-ecological representation on which we all agreed (see Fig. 2). In order to help envisioning disruptive changes, we decided to draw on possible future land/sea-scapes of the MPA. Here land/sea-scapes are understood in several aspects: coastal viewpoint, marine natural or artificial habitats, above/undersea marine space occupation by humans and nonhumans. To do so, we introduced visual tools during the prospective workshops (see Supplementary Note 1):
(i) an archetypal map of the MPA including typical features to recall main territorial issues without being trapped in too specific considerations: a city by the sea, the mouth of a river, an estuary, a rocky coast, a sandy coast;
(ii) tokens related to the available means to reach the narratives' objectives: ecosystem status (primary production), fisheries evolution (fishing effort), facilities planning & ecological engineering (esthetical artificial reefs, floating wind turbines, harbors and break walls, reintroduction of species), sea-users' access and regulation (recreational uses and fully protected areas). Tokens were used to inform participants about the localization and intensity of each item, which helped shape the participant's vision of the future and link with the simulation model.
(iii) cards describing real-world examples of what tokens stand for. They were used to broaden the participants' thinking scope by introducing stories in foreign places and at different times. Here, they helped illustrate alternative options among scenarios.
---
Overview of end-to-end models
End-to-end models represent the different ecosystem components from primary producers to top predators, linked through trophic interactions and affected by the abiotic environment 47 . They allow the study of the combined effects of fishing and climate change on marine ecosystems by coupling hydrodynamic, biogeochemical, biological and fisheries models. Some are suited to explore the impact of management measures on fisheries dynamics with an explicit description of fishing stocks' spatial and seasonal dynamics, fishing activities and access rights (ISIS-Fish) [48][49][50] but they do not represent environmental conditions or trophic interactions, so their capacity to simulate the impact of fisheries management on ecosystem dynamics and possible feedbacks is limited.
Others explicitly model trophic interactions between uniform ecological groups with biomass flows based on diet matrixes (Ecopath with Ecosim 51 , Atlantis). They rely on the assumption that major features of marine ecosystems depend on their trophic structure; thus, there is no need to detail each species to describe the state and dynamics of the ecosystem. They can be used to explore the evolution of the system under variations in biological or fishery conditions but may lack flexibility to simulate regime shifts due to radical variations in such conditions. Some others do not set a priori trophic interactions, which are considered too rigid to explore the nonlinear effects of both fishing and change in primary production. They describe predation as an opportunistic process that depends on spatial co-occurrence and size adequacy between a predator and its prey (OSMOSE). Due to the simulation of emergent trophic interactions, it is particularly relevant to explore the single or combined effects of fishing, management and climate change on ecosystem dynamics. However, they do not properly describe fisheries dynamics (fixed fishing mortality) and must be coupled with fleet dynamics models (dynamic effort allocation) 52 .
---
Ecosystems description
First, we selected three publications describing the specific ecosystem functioning associated with marine park habitats: the Mediterranean seagrass ecosystem 53 , the coralligenous ecosystem 54 and the algae-dominated rocky reef ecosystem 55 .
Second, we selected two publications using the same mass-balance model (EwE) to analyze the overall ecosystem structure and fishing impacts in the Gulf of Lion 56 and the northwestern Mediterranean Sea 57 . They both provide a snapshot of the trophic flows in the ecosystem during a given period, which is based on a consistent set of detailed data for each group of species: biomass density, food requirements (diet matrix), mortality by predation and mortality by fishing. The former focuses on the Gulf of Lion but depicts a larger area than that of the park in terms of distance to the shore and especially depth (-2500 m against -1200 m). Thus, the rocky reef ecosystem that exists within the park is "masked" by the prevalence of sandy/muddy habitats. The latter depicts a wider part of the Mediterranean Sea but is comparable to the park in terms of depth (-1000 m against -1200 m) and provides useful information on the rocky reef ecosystem.
Each ecosystem represents the following proportion of the whole system: muddy = 85.57%, sandy = 12.23%, rocky = 1.75%, posidonia = 0.23% and coralligenous = 0.22%. For "rocky", "posidonia" and "coralligenous", we selected corresponding ecological groups and associated data (EwE) from functional compartments (EBQI). For "sandy&muddy", we created an ad hoc conceptual model of the ecosystem functioning from the Gulf of Lion trophic chain (EwE).
---
Food-web modeling
For each related group of species, the variation in the average density results from the equal combination of two potential drivers on a yearly basis: the abundance of prey (bottom-up control, positive feedback) and the abundance of predators (top-down control, negative feedback). To do so, we use data from the EwE publications listed above: biomass density, food requirements (diet matrix), mortality by predation and by fishing (see Supplementary Tables 11-14). For one species, the white gorgonian (Eunicella singularis), we use site-specific data produced during the RocConnect project (http://isidoredd.documentation.developpementdurable.gouv.fr/document.xsp?id=Temis-0084332).
To model the effect of prey abundance on their predators, the biomass of each group of species is described as the sum of its annual food requirements, detailing each prey (see Supplementary Tables 1-4). As long as nothing happens to a prey species, there is no change in prey abundance, and the biomass of each predator species remains the same. If anything happens to a prey species, this translates into that species' density, which then reflects its availability for feeding predators and eventually affects the biomass of predator species. The effect on the biomass of predator species is proportional to the change in prey species density and to the specific weight of prey species in each predator's diet. In other words, the more prey there is at the beginning of the period, the more of its predators there could be at the end.
To model the effect of predator abundance on their prey, we follow the reciprocal reasoning of the above mechanism. Here, the biomass of each group of species is described as the sum of its annual catches by each other species (see Supplementary Tables 1-4). Here again, as long as nothing happens to a predator species, there is no change in predator abundance, and the biomass of each prey species remains the same. If anything happens to a predator species, this translates into that species' density, which is then reflected in its food requirements and eventually affects the biomass of prey species. However, this time, the effect on the biomass of prey species is inversely proportional to the change in predator species density and to the specific weight of predator species in each prey's mortality. In other words, the more predators there are at the beginning of the period, the less prey there could be at the end.
There are only two exceptions to this rule: phytoplankton and detritus. The production of phytoplankton relies on photosynthesis, which requires water, light, carbon dioxide and mineral nutrients. These elements are beyond our representation, so we impose the value of the phytoplankton biomass density at each time step. Additionally, the value of phytoplankton biomass density is the variable used to represent the expected effect of climate change on primary production. The production of detritus comes from three sources: natural detritus, discards and bycatch of sea turtles, seabirds and cetaceans. In other words, the amount of detritus depends on the activity of other marine organisms. Here, we model the amount of detritus as a constant share of the total annual biomass.
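As an illustration only, the following Python sketch captures the two reciprocal update rules described above (bottom-up and top-down control). All names, data structures and the one-year step are assumptions chosen for readability; they do not reproduce the SAFRAN implementation, and the exceptions for phytoplankton (imposed density) and detritus (constant share of total biomass) are only noted in comments.

```python
# Minimal sketch of the two reciprocal update rules described above.
# Phytoplankton density is imposed externally and detritus is a fixed share
# of total biomass in the full model; neither is updated by these rules.

def bottom_up_effect(diet, prey_change):
    """Relative change in each predator's biomass driven by changes in prey density.

    diet[predator][prey] = share of `prey` in `predator`'s annual food intake.
    prey_change[prey]    = relative change in prey density over the year
                           (e.g. -0.10 for a 10% decline).
    """
    return {
        predator: sum(share * prey_change.get(prey, 0.0)
                      for prey, share in prey_diet.items())
        for predator, prey_diet in diet.items()
    }

def top_down_effect(mortality, predator_change):
    """Relative change in each prey's biomass driven by changes in predator density.

    mortality[prey][predator] = share of `prey`'s predation mortality due to `predator`.
    The effect is inversely proportional to the predator change.
    """
    return {
        prey: -sum(share * predator_change.get(pred, 0.0)
                   for pred, share in pred_share.items())
        for prey, pred_share in mortality.items()
    }

# Toy example: zooplankton density drops by 20%, so planktivorous fish lose
# 0.8 * 20% = 16% of their biomass through the bottom-up rule.
diet = {"planktivorous_fish": {"zooplankton": 0.8, "detritus": 0.2}}
print(bottom_up_effect(diet, {"zooplankton": -0.20}))
```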
---
Rationale for ABM
Most studies on MPAs analyze how they succeed from an ecological point of view 56 . Few others argue about the conditions under which they succeed from a socio-economic or cultural point of view (refs. 3,[58][59][60][61][62][63]). Little work embraces both aspects of MPAs [64][65][66]. Currently, agent-based models (ABMs) are convenient methods to integrate ecological and socioeconomic dynamics and are already used by researchers in ecology or economics for ecosystem management [67][68][69]. ABMs allow the consideration of any kind of agent with different functioning and organization levels 69,70 , including human activities, marine food webs and facilities planning. ABMs are also usually spatially explicit, which favors connecting with narratives that are spatially explicit too. Basically, an agent is a computer system that is located in an environment and that acts autonomously to meet its objectives. Here, environment means any natural and/or social phenomena that potentially have an impact on the agent.
For these reasons, ABMs are convenient methods to deal with SESs. The possibility of providing each kind of agent with a representation of the environment, according to specific perception criteria, is particularly interesting for applications in the field of renewable resource management 19 . ABMs developed for SES management usually integrate an explicit representation of space: a grid with each cell corresponding to a homogeneous portion of space. Time is generally segmented into regular time steps. The simulation horizon (total time steps) corresponds to the prospective horizon.
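A minimal, generic sketch of this structure is given below: a grid of homogeneous cells, regular yearly time steps up to the prospective horizon, and autonomous agents acting on their environment. Class names and behaviours are placeholders for illustration, not the SAFRAN code.

```python
# Generic agent-based skeleton: grid of cells, yearly steps, autonomous agents.
import random

class Cell:
    def __init__(self, habitat):
        self.habitat = habitat          # e.g. "sandy", "rocky", "posidonia"
        self.protected = False          # fully protected area flag

class FishingAgent:
    def __init__(self, gear):
        self.gear = gear

    def step(self, grid):
        # Act only in cells that are not fully protected.
        allowed = [c for row in grid for c in row if not c.protected]
        if allowed:
            random.choice(allowed)      # placeholder for harvesting logic

class Model:
    def __init__(self, width, height, horizon_years):
        self.grid = [[Cell("sandy") for _ in range(width)] for _ in range(height)]
        self.agents = [FishingAgent("trawl"), FishingAgent("artisanal")]
        self.horizon = horizon_years

    def run(self):
        for year in range(self.horizon):
            for agent in self.agents:
                agent.step(self.grid)

Model(width=10, height=10, horizon_years=2050 - 2018).run()
```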
---
Modeling of drivers and indicators of ecosystem status
In the Mediterranean Sea, the current scientific consensus outlines a reduction in primary production and changes in species composition in the ecosystems as an effect of climate change. However, the reorganization of trophic networks linked to these changes in species composition is still an open debate. Hence, to model the expected effect of climate change on the ecosystems of the Natural Marine Park of the Gulf of Lion, we build on IPCC projections that consider a 10% to 20% decrease in net primary production at low latitudes by 2100 due to reduced vertical nutrient supply 71,72 .
Indeed, the combined consequences of climate change, such as increasing water temperature and hydric stress, act synergistically to reduce primary production. The former reinforces the stratification of surface waters, resulting in a reduced supply of nutrients and hence a decrease in primary production. The latter also reduces the flow of nutrients from the rivers to the sea, impacting primary production. Applied to our simulation horizon, this can be translated into a steady annual decrease of up to 4% in phytoplankton biomass density between 2018 and 2050.
To model fisheries, we use the same rule as for the effect of predator abundance on their prey, but here it represents the effect of fishing effort on harvested species. As our entry point is traditional small-scale multispecies fisheries, we do not directly modify fishing effort by species but rather by fishing gear 56 . A change in the fishing effort of a given fishing gear first affects the total biomass of its harvested species and is then allocated among species according to the fishing ratios of the base year. Thanks to the EwE publication on the Gulf of Lion and the included data on landings by gear and by species 56 , we were able to distinguish four fishing gears: trawls, tuna seiners, lamparos (a traditional kind of night-time fishing using light to attract small pelagics), and other artisanal fishing gear. This does not include recreational fishing. To spatialize fisheries, we do not associate each fishing gear with specific locations or habitats: fishing effort by fishing gear is the same all over the area, with two exceptions. The first refers to FPAs where any kind of fishing is forbidden (Cerbère-Banyuls Natural Marine Reserve). The second refers to trawls and artisanal fisheries whose activity is constrained by practical or legal concerns. First, it is known that artisanal fisheries work mostly near the coast, up to a maximum distance of 6 nautical miles and a maximum depth of -200 meters. Second, trawls are prohibited between 0 and 3 nautical miles (2013 Trawl Management Plan). Here, we do not model transfer effects between sites or towards new sites.
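The allocation step can be illustrated as follows; the landing figures and species names are hypothetical, and only the fixed base-year ratio logic reflects the description above.

```python
# Sketch of allocating a gear-level change in fishing effort among harvested
# species, using base-year landing shares as fixed ratios (illustrative data).

def allocate_effort_change(base_year_landings, effort_multiplier):
    """Return the new fishing pressure by species for one gear.

    base_year_landings: tonnes landed by species in the base year.
    effort_multiplier:  relative change in the gear's effort
                        (e.g. 0.8 for a 20% reduction).
    """
    total = sum(base_year_landings.values())
    shares = {sp: catch / total for sp, catch in base_year_landings.items()}
    new_total = total * effort_multiplier
    return {sp: new_total * share for sp, share in shares.items()}

# Hypothetical artisanal-gear landings (tonnes) and a 20% effort cut.
print(allocate_effort_change({"seabream": 40, "seabass": 25, "octopus": 35}, 0.8))
```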
To model diving, we use a GIS layer indicating the most popular diving sites in the park. With each diving site, we associate an annual number of visitors that fits known trends. Here, changes in diver attendance depend on the extent of fully protected areas prohibiting this practice. Here, again, we do not model transfer effects between sites or towards new sites.
To model FPAs and access rights, we use a GIS layer indicating the boundaries of the existing FPA (Cerbère-Banyuls Natural Marine Reserve), where fishing is prohibited. To model the creation of new FPAs, we target important natural areas. To do so, we use a GIS layer corresponding to a map from the park management plan that indicates important natural areas (see Supplementary Fig. 1a,b). More precisely, the map scales areas according to their natural value using a "heat gradient" (see the management plan for details). To reach the level of protection expected in each scenario, we lowered the level of natural value required for designation as an FPA every 5 years between 2020 and 2030. These threshold levels of natural value are chosen to approach the expected level of protection. Areas to be protected are designated according to their natural value, but the rules of attribution change slightly among scenarios. When protecting a large portion of the MPA (scenario 1, Supplementary Fig. 2), there is no need to first target a specific area: one is sure that all areas of great natural value will be included in the protected perimeter. Here, we seek to make progress over the overall MPA, and the only criterion for designation as a protected area is the level of natural value. When protecting a small portion of the MPA (scenario 2), one may want to make sure to protect consistent areas of great natural value rather than scattered micro hotspots. To do so, we target the existing Marine Reserve and let new protected areas develop in its surroundings. When protecting a medium portion of the MPA (scenario 3), we use a combination of the two previous rules: in 2020, we target the surroundings of the Marine Reserve to be sure to protect this area of greatest natural value, while in 2025 and 2030, we also let protected areas develop elsewhere, according to the local level of natural value. Concerning access rights, fully protected areas were intended as "no go, no take" zones/integral reserves during the workshops. Thus, we prohibit fishing and diving in the corresponding perimeters.
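A simplified sketch of this threshold-based designation rule is shown below; the cell identifiers and threshold values are placeholders, since the actual thresholds were chosen to approach each scenario's protection target.

```python
# Sketch: every designation year the natural-value threshold is lowered and
# all cells at or above it become fully protected areas (FPAs).

def designate_fpa(natural_value, thresholds):
    """natural_value: dict cell_id -> score from the park's "heat gradient" map.
    thresholds:      dict year -> minimum score required for designation.
    Returns the cumulative set of protected cells after each designation year."""
    protected = set()
    timeline = {}
    for year in sorted(thresholds):
        protected |= {cell for cell, value in natural_value.items()
                      if value >= thresholds[year]}
        timeline[year] = set(protected)
    return timeline

cells = {"A": 0.9, "B": 0.7, "C": 0.5, "D": 0.2}
print(designate_fpa(cells, {2020: 0.8, 2025: 0.6, 2030: 0.4}))
```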
To model facilities planning, we select artificial reefs and floating wind turbines. We do not represent harbors and break walls, as during the workshops they were mostly associated with sea-level rise. This is a major issue but beyond the scope of this ecosystem-based modeling.
To model ecological engineering and the implementation of artificial reefs, we use a GIS layer indicating their location, and we assume that they are comparable to natural rocky reefs 73 . Thus, existing artificial reefs are associated with the same food web as the rock ecosystem cited above. According to expert opinion, the occupancy rate of existing artificial reef villages inside the park is ~12%. To model their densification, we impose a steady annual increase in the biomass of each species until it reaches the equivalent of a 50% occupancy rate by 2050. To model the installation of new reefs in new villages, we replace a portion of sandy habitat with rocky habitat corresponding to an occupancy rate of 50%. Then, we describe a three-step colonization by marine organisms: (i) a pioneer phase of 1 year with the development of phytoplankton, zooplankton, detritus, macroalgae and worms; (ii) a maturation phase of 2-5 years with the development of suprabenthos, gorgonians, benthic invertebrates, sea urchins, octopuses, bivalves and gastropods; (iii) a completion phase after 5 years, with the development of salema, sparidae, seabream, conger, seabass, scorpion fish, and picarel 73 .
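The age-dependent colonization sequence could be encoded as in the following sketch; the group lists are abbreviated and the occupancy-rate dynamics described above are omitted.

```python
# Sketch of the three-step colonization of a new artificial reef as a
# function of its age (groups abbreviated; illustrative only).

def colonization_groups(age_years):
    """Return the groups of organisms assumed present on a reef of a given age."""
    pioneer = ["phytoplankton", "zooplankton", "detritus", "macroalgae", "worms"]
    maturation = ["suprabenthos", "gorgonians", "benthic invertebrates",
                  "sea urchins", "octopuses"]
    completion = ["salema", "seabream", "conger", "seabass", "scorpion fish"]
    if age_years < 1:
        return pioneer
    if age_years <= 5:
        return pioneer + maturation
    return pioneer + maturation + completion

for age in (0, 3, 7):
    print(age, "years:", len(colonization_groups(age)), "groups present")
```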
To model floating wind turbines, we create a GIS layer from a map used by the management team of the park to initiate debates with stakeholders on possible locations of already approved experimental turbines and of possible new commercial ones. During the workshops and the project team meetings, two possible adverse effects of floating turbines on the ecosystem were discussed. Some participants considered that the floating base and the anchorages would have a sort of "fish aggregating device" effect, while the location area would be closed to fishing. Others thought that antifouling paint would prevent such an effect, while ultrasound from the operating turbines would disturb cetaceans. Here, we do not model these alternative effects because of time constraints and, to our knowledge, a lack of scientific evidence and data. We model their possible progressive development every five years between 2020 and 2045 around the "overall" and "most acceptable" areas designated by the map, using a propagation rule in the surroundings of already approved experimental turbines.
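A minimal sketch of such a propagation rule is given below; the grid, eligibility set and number of steps are illustrative assumptions and do not correspond to the park's GIS layers.

```python
# Sketch: every propagation step, turbines are added to eligible cells
# adjacent to cells that already host one (illustrative neighbourhood rule).

def propagate_turbines(occupied, eligible, steps):
    """occupied: set of (x, y) cells with turbines; eligible: allowed cells."""
    for _ in range(steps):
        frontier = set()
        for (x, y) in occupied:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                cell = (x + dx, y + dy)
                if cell in eligible and cell not in occupied:
                    frontier.add(cell)
        occupied |= frontier
    return occupied

eligible = {(x, y) for x in range(5) for y in range(5)}
print(sorted(propagate_turbines({(2, 2)}, eligible, steps=2)))
```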
To model multipurpose facilities, we add attendance indicators to artificial reefs and floating turbines in some cases. In scenarios 2 and 3, the development of a commercial wind farm is associated with a dedicated tourist activity consisting of boat visits to the area that explain its purpose and possible effects on ecosystems. With each turbine, we associate an annual number of visitors deduced from assumptions on the number of opening days per year, the number of visits per day, and the number of passengers per visit. Here, visitor attendance follows from the development of a commercial wind farm. In scenario 3, a few artificial reefs are developed with both ecological and esthetic concerns and are associated with the development of a dedicated diving activity. With each reef, we associate an annual number of divers deduced from assumptions on the number of opening days per year, the number of visits per day, and the number of divers per visit. Here, visitors' attendance follows from recreational reef development. Two esthetic artificial reef villages are developed, in 2025 and 2035.
To model the reintroduction of species, we focus on one heritage species in scenario 1 (grouper) and on two commercial species in scenario 2 (seabass and dentex). Concerning sites of reintroduction, we targeted rocky ecosystems and specifically existing artificial reef villages. Each year between 2020 and 2025, we repopulate with juveniles and adult individuals expressed in biomass equivalents. Here, priority is given to meeting the food needs of the reintroduced species, corresponding to their estimated biomass levels, even at the expense of the already established species. As the biomass levels of reintroduced species are of the same order as those of the top predators already represented in the rock ecosystem, this hypothetical situation calls for a more complex representation of their competition for food in later work.
---
DATA AVAILABILITY
The data that support the findings of this study are available in the Supplementary Materials.
---
CODE AVAILABILITY
The code that supports the findings of this study is available on GitHub at: https://github.com/elsamosseri/SAFRAN.
---
AUTHOR CONTRIBUTIONS
All authors wrote and reviewed the main text. C.B., A.S. and X.L. designed Fig. 1. E.M. and X.L. designed Figs. 2 and
---
COMPETING INTERESTS
The authors declare no competing interests.
---
ADDITIONAL INFORMATION Supplementary information
The online version contains supplementary material available at https://doi.org/10.1038/s44183-023-00011-z.
Correspondence and requests for materials should be addressed to C. Boemare.
Reprints and permission information is available at http://www.nature.com/reprints. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 57,866 | 881 |
3310d805c6c4196eefd9eb250ad5d6a66c329fc1 | Exposure of Children to Unhealthy Food and Beverage Advertisements in South Africa | 2,021 | [
"JournalArticle"
] | Television (TV) is a powerful medium for marketing food and beverages. Food and beverage marketers tend to use this medium to target children with the hope that children will in turn influence their families' food choices. No study has assessed the compliance of TV marketers with the South African Marketing to Children pledge since the enactment of the 2014 food advertising recommendations by the South African Department of Health and the Advertising Standards Authority. This study investigated the extent and nature of advertising of unhealthy versus healthy food and beverages to children in South African TV broadcasting channels. The date, time, type, frequency and target audience of food advertisements (ads) on four free-to-air South African TV channels were recorded and captured using a structured assessment guide. The presence of persuasive marketing techniques was also assessed. Unhealthy food and beverage advertising was recorded at a significantly higher rate compared with healthy food and beverages during the time frame when children were likely to be watching TV. Brand benefit claims, health claims and power strategies (e.g., advertising using cartoon characters and celebrated individuals) were used as persuasive strategies. These persuasive strategies were used more in unhealthy versus healthy food ads. The findings are in breach of the South African Marketing to Children pledge and suggest a failure of the industry self-regulation system. We recommend the introduction of monitored and enforced statutory regulations to ensure healthy TV food advertising space. | Introduction
Childhood overweightness and obesity prevalence have increased at an alarming rate and have become one of the most serious global health challenges [1]. South Africa is not immune to these childhood health challenges in that, according to the outcomes of the South African National Health and Nutrition Examination Survey (SANHANES-1), the prevalences of overweightness and obesity in children aged 10 to 14 years are 12.1 and 4.2 percent, respectively, while in children aged 15 to 17 years they are 13.3 and 4.8 percent, respectively [2]. Unhealthy food and beverage advertising on television (TV) has been implicated in the development of childhood overweightness and obesity [3]. For instance, TV food and beverage advertising exposure has been shown to influence the amount of food that children who watch a lot of TV consume [4,5]. The most advertised foods and beverages on TV tend to be high in fat, sugar and salt and low in essential minerals, vitamins, amino acids and fibre [6]. Children also seem to be more susceptible than adults to the persuasive approach used by TV marketers when advertising food and beverages [4,7]. Hence, children who watch food and beverage advertisements (ads) tend to choose these foods and beverages thinking they are healthy, with little interest in knowing their nutrient content [7,8].
Food marketers have typically used a mixture of techniques to increase children's desire for unhealthy food and beverages [9,10]. An example of these techniques is the use of misleading claims that portray specific food and beverages as bringing about enhanced performance (e.g., in sport, at schools) [9]. Others have utilised cartoon-related characters that are known to increase brand recognition among children [11]. Advertisements also portray people who make unhealthy food choices to appear to have desirable outcomes [12].
Given the extensive evidence of the negative impact of food advertising on children, the World Health Organisation (WHO) advocated a control of TV food marketing especially with regard to marketing directed towards children [13,14]. This could support the creation of a food environment that promotes healthy dietary choices. The WHO also proposed that countries should develop and adopt policies to control the marketing of food towards children with a specific emphasis on the reach, frequency, creative content, design and execution of the marketing message [14]. The WHO has argued that policies initiated against the negative influence of marketing such as TV food advertising need to be comprehensive to be efficient [14].
In South Africa, the number of peer-reviewed studies investigating children's exposure to unhealthy food ads is limited. A study conducted by Van Vuuren between 2003 and 2005 [15] estimated that children were exposed to a daily average of 24 minutes of advertising. Subsequently, the South African government proposed better control of TV food advertising to children in response to the call by the WHO [16]. This led to the development of a code for advertising by the Department of Health and the consortium of major food corporations [17,18]. The food and beverage advertising code was formally initiated by the Advertising Standards Authority (ASA) in 2008. This code led to a pledge (i.e., the South African Marketing to Children pledge), signed by members of the major food corporations in 2009, to adhere to the code. The core principle was to publicly pledge "to commit to marketing communications to children who are twelve years old and under, to promote healthy dietary choices and healthy lifestyles" [17]. This pledge, as the sole form of regulation to control food ads, placed South Africa in the group of countries where food industries regulate themselves (i.e., self-regulation). This type of regulation is widely known for non-compliance by food marketers [10,19].
It is therefore not surprising that even after signing the pledge, studies found a poor adherence to the TV advertising guidelines in South Africa with major infringements of the pledge being identified [20][21][22]. According to the aforementioned studies, unhealthy food advertising continues to be prevalent in South Africa. The most frequent ads shown during periods when children are likely to be watching TV are for desserts and sweets, fast foods, hot sugar-sweetened beverages, starchy foods and sweetened beverages as found by Mchiza et al. [21]. Additionally, 67% of alcohol-related ads are shown during family viewing time [21].
Based on these findings by Mchiza et al. [21], the following recommendations were made to the Department of Health (DoH) and the ASA of South Africa in 2014 [23]:
1. The prohibition of the advertising of foods and beverages high in fat, sugar and salt, following the World Health Organization (WHO) recommendations;
2. The prohibition of alcohol ads, especially when children are watching;
3. The restriction of the use of advertising techniques that appeal to children. Ads should not use cartoon characters and/or animations or include promotional offers and gifts or tokens.
No research has assessed the rate of advertising of unhealthy food and beverages to children since the enactment of the 2014 food advertising recommendations by the South African DoH and ASA. Furthermore, no study has investigated the compliance of food marketers with the South African Marketing to Children pledge [17]. To address these gaps, this study investigated the extent and nature of advertising of unhealthy versus healthy food and beverages to children by the major South African TV broadcasting channels.
---
Methods and Procedures
The categories and techniques employed in this study were adapted from the International Network for Food and Obesity/Non-Communicable Diseases Research, Monitoring and Action Support (INFORMAS) module relating to the monitoring and benchmarking of unhealthy food promotion to children [24]. This approach was evaluated as adequate to assess the frequency and level of exposure of population groups (especially children) to food promotions, the persuasive power of techniques used in promotional communications and the nutritional composition of promoted food products. The South African Nutrient Profiling Model (SA-NPM) was used for the contextual adaptation of the tool [25].
---
Channels and Time of Broadcasting
Food ads were recorded from the four major South African TV channels. Recordings were done from 15:00 to 19:00 (i.e., 4 h) for seven consecutive days from 23 April 2017 to 29 April 2017. This resulted in a total of 112 broadcasting hours for the four stations together.
---
Description of the Target Audience of Broadcasting Channels
Television in South Africa is funded from license fees and advertising and broadcasts on four free-to-air channels (South African Broadcasting Corporation (SABC) 1, 2, 3 and Enhanced Television (e-TV)) with a mixed entertainment and public service mandate. According to the SABC segmentation, all four TV channels focus on the same target audience during the following intervals. From 15:00-17:00 hours they target children; during this time, the child-focused programs shown are infomercials, educational programs and cartoons. Following this time, from 17:00-19:00 hours, the target becomes the whole family including children. During this period, talk shows and soap operas are shown (Table 1). For the sake of the current study, these two time periods form the stipulated period when children are expected to be part of the TV audience.
---
Selection and Coding Procedures
The data were collected manually by recording the live video broadcasted on the four TV stations concurrently within the stipulated period. A TV tuner (WinTV tuner), a Windows Media Centre compatible with Windows 10 and storage devices were the tools employed to carry out the task. Coding was done by two independent researchers (a nutrition expert with a PhD in nutrition and dietetics and a postgraduate researcher with a Master of Public Health specializing in nutrition). Both researchers independently viewed a playback of the recorded videos one TV station at a time. In the case of any disagreement, recoding was done until 100% agreement was reached. No distinction was made between unique and repetitive ads.
Ads were selected if they fell into one of the following categories and were coded accordingly: (i) healthy food or beverage, (ii) unhealthy food or beverage, (iii) neutral (Table 2). Healthy foods were defined as core foods that are nutrient-dense and recommended for daily consumption. Unhealthy foods were non-core foods that are high in undesirable nutrients such as fat, refined sugars and salt. The neutral category consisted of food and beverage-related items that could not explicitly be labelled as healthy or unhealthy such as baby and toddler milk formula, tea and coffee. For each ad, the following information was collected: (i) television channel (TV station being recorded); (ii) name and type of program in which the ad was shown; (iii) date and time of the day when the ad was shown; (iv) assumed target audience of the ad; (v) company placing the ad; (vi) description of the product advertised; (vii) brand benefit claim (claims other than those relating to health that were directed towards developing positive perceptions about a company's product that might influence brand attachment), if any; (viii) description of health claim (any claim of the food or its constituent having an effect on health or being healthy or having a nutritional property) [24], if any; (ix) power strategy (this could be a promotional character or event or person employed to increase the persuasive power of an advertisement) [24], if any; (x) duration of the ad. Brand benefit claims, health claims and power strategies constituted the persuasive techniques that were studied. This study focused only on child and family viewing times.
---
Statistical Analysis
A descriptive analysis was done using measures of central tendency, standard deviations (SD) and ad rates for different ad subgroups (e.g., TV channel, viewing time). The differences between the healthy and unhealthy categories were calculated using a one-sample proportions test with Yates's continuity correction. For small samples, an exact binomial test was used. The analysis was done using R software [26].
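The analyses were run in R; purely for illustration, the Python sketch below reproduces the two named tests (a one-sample proportion test with Yates's continuity correction, implemented directly from the standard formula, and an exact binomial test). The counts shown are examples, not the exact comparisons reported by the study.

```python
# Illustrative only: the study's analyses were done in R, not with this code.
from scipy.stats import binomtest, chi2

def prop_test_yates(successes, n, p0=0.5):
    """One-sample proportion test with Yates's continuity correction.

    chi-square statistic = (|x - n*p0| - 0.5)^2 / (n * p0 * (1 - p0)), df = 1.
    """
    stat = (abs(successes - n * p0) - 0.5) ** 2 / (n * p0 * (1 - p0))
    return chi2.sf(stat, 1)  # upper-tail p-value, 1 degree of freedom

# e.g. 342 unhealthy ads out of 432 healthy + unhealthy ads, H0: p = 0.5
print(prop_test_yates(342, 432))

# Exact binomial test for small samples, e.g. 9 of 12 ads unhealthy
print(binomtest(9, 12, 0.5).pvalue)
```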
---
Ethical Considerations
The Humanities and Social Science Research Ethics Committee of the University of the Western Cape approved the methodology and exempted the ethics of the current research (Reference Number: HS19/6/6). The project is also registered with the University of the Western Cape's Higher Degrees Committee. The data did not include any personal information.
---
Results
---
General Description
A total of 1629 ads were shown on the four TV channels combined, of which 582 (35.7%) related to food and beverages. This corresponded to an average advertisement rate of 5.2 ads per channel-hour (ads/c-h). Unhealthy food/beverage items constituted more than half (342: 58.8%) of the total food/beverage-related ads, followed by neutral ads (150: 25.8%) and healthy food/beverage ads (90: 15.5%). The mean duration of the ads was 29.4 s and ranged from 6 to 45 s.
The highest ad rate (6.1 ads/c-h) was recorded during family viewing time (Table 1). Unhealthy foods were advertised more than three times as often as healthy foods for the child viewing time and more than four times as often during family viewing time. The rates of ads for unhealthy food and beverages were significantly higher (p < 0.001) than those for healthy food and beverages during the child and family viewing times.
---
Food and Beverage Advertisements during Child and Family Viewing Time
During child and family viewing time (Table 2), supermarket-related ads with only unhealthy foods advertised appeared most often (0.66 ads/c-h). This was followed by fast food-related ads with unhealthy and neutral options advertised (0.55 ads/c-h). Alcohol was advertised at 0.25 ads/c-h. Of the neutral category, ads about vitamin/minerals or other dietary supplements and sugar-free chewing gum had the highest rate (0.43 ads/c-h) (Table 2).
South African Broadcasting Corporation 3 and e-TV had higher advertising rates than the other channels (7.4 ads/h and 5.3 ads/h, respectively) (Table 3). Unhealthy foods were advertised significantly more often than healthy foods, especially by e-TV. The proportion of ads with a brand benefit claim was 96%. Moreover, ads frequently used more than one brand benefit claim. As shown in Table 4, overall, brand benefit claims were used at an average rate of 3.1 claims per channel-hour (claims/ch-h) for healthy foods and 10.1 claims/ch-h for unhealthy foods. Claims promoting children or family as the users of the product, emotive claims and claims using sensory-based characteristics were the top three claims amongst the healthy and unhealthy categories (Table 4). The rates of using brand benefit claims were significantly higher (p < 0.001) for unhealthy versus healthy products in all categories with the exception of the Suggested use (great for lunchboxes) category. The proportion of food ads with health claims was 45%. A few of these ads made more than one health claim. The total rate of health claims recorded for the unhealthy food category (1.7 claims/ch-h) was significantly higher than the rate for the healthy food category (1.2 claims/ch-h) (Table 5). The claim of the product containing a health-related ingredient was most frequently used (0.7 claims/ch-h) in unhealthy foods. The rates of using health claims were significantly higher (p < 0.001) for unhealthy than healthy products in the Health-related ingredient and Nutrient comparative claim categories. Finally, 34% of ads used power strategies. A few of these ads used more than one power strategy (Table 6). Power strategies were used significantly more in ads for unhealthy food (2.2 power strategies per channel-hour) than in ads for healthy foods (0.3 power strategies per channel-hour). Celebrity endorsement, cartoon/company-owned characters and promoting the food as being child-tailored (e.g., using an image of a child) were the most used power strategies. The rates of using power strategies were significantly higher for unhealthy than healthy food products where celebrated individuals and sports events were used (p < 0.01).
---
Discussion
This study investigated the exposure of South African children to unhealthy food and beverage ads. We identified 582 ads for food and beverages within the child and family viewing time. The overall rate of food and beverage-related ads was found to be 5.2 ads/ch-h. The four free-to-air TV channels advertised unhealthy foods at significantly higher rates than the healthy foods. Brand benefit claims and power strategies had significantly higher rates of use in unhealthy than healthy food ads.
---
Advertisements for Unhealthy Food and Beverages during Child and Family Viewing Times
There were almost four times as many ads for unhealthy (342: 58.8%) compared with healthy foods (90: 15.5%). This may predispose children who watch TV to choose unhealthy foods that are high in fat, salt and sugar due to their vulnerability to TV ads [5,6]. This violates the South African Marketing to Children pledge, which suggests a commitment by industry not to market food to children unless the aim is to advocate healthy dietary choices [17]. This is an indication that children in South Africa are at an increased risk of exposure to unhealthy food ads. Studies conducted in other countries have also found unhealthy foods to be proportionally more advertised than healthy foods. In Turkey, for instance, the number of ads for fast foods and beverages was found to be significantly higher than that for healthy food products [27]. In Thailand, the average ad rate of unhealthy food was also shown to be 2.9 ads/ch-h compared with 0.2 and 0.9 ads/ch-h for the healthy and neutral categories, respectively [28].
Of particular concern were alcohol ads, which occurred at a rate of 0.25 ads/c-h during this period. This outcome is in violation of the SA DoH and ASA guidelines, which clearly state that no alcoholic beverage ads are to be shown when children are likely to be viewing television [23]. Anderson [29] also argues that young people may be particularly susceptible to alcohol ads as they shape their attitudes, perceptions and expectancies about alcohol use. Indeed, Austin and Nach-Ferguson [30] found that children aged 7 to 12 years who enjoyed the alcoholic beverage ads to which they were exposed were more likely to try these beverages. Showing alcoholic beverage ads that may be appealing to children (e.g., by using celebrities and popular individuals) will more likely trigger them to become alcoholic beverage drinkers [30].
Another source of concern was the high rate of ads for sugar-sweetened beverages (SSB) because of the well-documented detrimental effects they can have on children [31][32][33][34].
---
Persuasive Techniques
Persuasive techniques found in ads for all three food categories included power strategies, brand benefit claims and health claims. These techniques may be misleading as they may promote unbeneficial effects or mask harmful effects, which is of specific concern when it comes to unhealthy food [9,10]. Ads identified in the current study also carried various brand benefit claims (e.g., emotive claims, puffery) and power strategies (such as referring to famous sportspersons) that may make them more appealing. Power strategies were markedly employed to promote unhealthy foods more than healthy foods. Ads used, for example, the image of a child (child tailored) and non-sports celebrities as power strategies to promote unhealthy foods during this study. The use of cartoon characters and celebrated individuals to promote unhealthy food and alcohol are not new phenomena in South Africa. Mchiza et al. [21] had previously noted that, in 2010, 10% of the alcoholic beverage ads were shown on South African TV when children and family were supposedly watching. Mchiza et al. [21] also reported that these ads were promoted with the help of celebrated individuals such as movie actors, sportsmen and TV personalities. Delport [22] highlighted that techniques such as the use of cartoon characters are employed to create imagery of fun and excitement that appeals to children.
Oyero and Salawo [9] assert that the use of health claims when advertising unhealthy food represents a derogation of the importance of healthy foods. Lacking the intellectual capacity and skills to deal with the appeal of these messages [35], children are even more likely to fall for this deception and may more easily accept these false health claims as the truth. This may shape the way they see what is healthy and unhealthy and may ingrain misconceptions in their minds about what is healthy while fostering unhealthy eating habits.
Brand benefit claims were another persuasive technique utilised to advertise both healthy (3.1 claims/ch-h) and unhealthy food (10.1 claims/ch-h). Brand benefit claims have previously been used in South Africa, particularly those that portray fun [21,36]. Mchiza et al. [21] found ads for desserts, sweets and sugar-concentrated beverages to contain portrayals of exaggerated pleasure sensations such as depictions of lovely taste, fun and addictive sensations. Pengpid and Peltzer [36] found similar claims and others such as improving one's social worth and status. According to Harris et al. [12], the use of fun and excitement imagery in food ads has increased consumption among those exposed. Repetitive exposure to these brand benefit claims tends to lead to the development of a relationship with the brand [14], which can be exploited by marketers of unhealthy foods.
---
Policy Implications
The South African Marketing to Children pledge makes it clear that there should be no use of celebrities and licensed characters (such as cartoons) in advertising unhealthy foods to children [15]. The food and beverage advertising codes (to which the food companies submitted through the pledge) assert that children are easily influenced and so should not be misled with false or exaggerated advertising claims [17]. Signatories of the pledge are admonished to be honest in their ads and not to take advantage of children's lack of experience or knowledge when advertising foods to them. Thus, claims such as the emotive claims recorded in this study go against the social values of advertising under the food and beverage advertising codes [17]. The common use of these strategies to advertise unhealthy foods as identified in our study violates the South African Marketing to Children pledge [17].
With the many violations of the food and beverage advertising codes and the South African Marketing to Children pledge, it appears that the outcome of the self-regulatory approach adopted by South Africa is unsatisfactory. The persistent flouting of these codes, as revealed in the current study, by Mchiza et al. [21] and by Delport [22] comes as no surprise as self-regulation around the world has proven to be ineffective in limiting unhealthy food advertising to children [26]. This unsatisfactory outcome emanates from the laxity in the enforcement of self-regulation codes [19], which could be attributed to the intention/drive among industry players to make profits. Self-regulatory policies make it appear as though advertising is being controlled while in reality all of these policies seem to do is to stifle change [19].
An introduction of statutory regulations in South Africa would signify a refreshing change in the food advertising environment. Additionally, strict monitoring and the enforcement of significant penalties may serve as a deterrent to companies and television stations who disregard the policies. The above policies have been shown to be effective in reducing unhealthy food ads to children [37]. New regulations should strictly control the use of persuasive strategies in unhealthy food ads. Educating food marketers on the importance of adhering to the policies for controlling food advertising may help bring about attitudinal change. A watershed period after which unhealthy food ads would be allowed could also be considered.
---
Strengths and Limitations of the Current Study
The strengths of the current study included the thorough and systematic assessment of ads based on a structured guide developed for international monitoring and benchmarking. The assessment also covered several domains of persuasive techniques.
The limitations included the limited scope of the current research in that the data captured were from the free-to-air TV channels only (those channels accessible to most children from disadvantaged communities) and were collected at a single point in time. As such, ads shown on other South Africa TV channels (especially those that are accessible to more affluent communities i.e., pay/subscription TV) were missed. It may therefore be impossible to generalise these data to other South African populations that have access to pay/subscription TV channels. This study was carried out on TV stations and, as such, may not adequately represent the nature of food ads in the wider food advertising space in South Africa that also includes social media ads, radio ads, etc. While the duration of the study captured periods where children are likely to be watching TV, there is the potential for children to be exposed to TV food ads outside of those hours included in this study. Therefore, the overall potential exposure for TV food ads could be higher than reported in this study. This study also did not investigate the causal effect of food advertising to South African children. For instance, in this study, only potential exposures could be assessed without accounting for the number of child viewers of these ads. As such, new research is needed that will investigate how South African children respond to food ads. The findings can be utilised for the specific regions or African countries that have access to these South African TV stations but cannot be extrapolated to countries outside this group unless they have a similar context and TV ad regulations. Lastly, our results could not be compared with earlier findings as these studies used different classification systems. We think that the classification used in the current study can serve as a benchmark for future comparisons.
---
Conclusions
This study suggests a high exposure among children to unhealthy food and beverage advertising, including alcohol. Cartoons, celebrities, brand benefit claims and health claims were used more often in ads for unhealthy than healthy food. These techniques may foster children's craving for unhealthy food while making unhealthy food consumption a part of their value pattern. These findings breach the South African Marketing to Children pledge and represent an unsatisfactory outcome of the self-regulation system practiced in South Africa. There is, therefore, an urgent need for tighter control of the TV food advertising space. Options include statutory regulations and a watershed period for unhealthy food ads.
---
Data Availability Statement: Not applicable
| 25,318 | 1,596 |
27264e694e93c48e320e69b1099401a0ae04e935 | Psychological, social, and welfare interventions for torture survivors: A systematic review and meta-analysis of randomised controlled trials | 2,019 | [
"JournalArticle",
"Review"
] | Torture and other forms of ill treatment have been reported in at least 141 countries, exposing a global crisis. Survivors face multiple physical, psychological, and social difficulties. Psychological consequences for survivors are varied, and evidence on treatment is mixed. We conducted a systematic review and meta-analysis to estimate the benefits and harms of psychological, social, and welfare interventions for torture survivors.We updated a 2014 review with published randomised controlled trials (RCTs) for adult survivors of torture comparing any psychological, social, or welfare intervention against treatment as usual or active control from 1 January 2014 through 22 June 2019. Primary outcome was post-traumatic stress disorder (PTSD) symptoms or caseness, and secondary outcomes were depression symptoms, functioning, quality of life, and adverse effects, after treatment and at follow-up of at least 3 months. Standardised mean differences (SMDs) and odds ratios were estimated using meta-analysis with random effects. The Cochrane tool was used to derive risk of bias. Fifteen RCTs were included, with data from 1,373 participants (589 females and 784 males) in 10 countries (7 trials in Europe, 5 in Asia, and 3 in Africa). No trials of social or welfare interventions were found. Compared to mostly inactive (waiting list) controls, psychological interventions reduced PTSD symptoms by the end of treatment (SMD -0.31, 95% confidence interval [CI] -0.52 to -0.09, p = 0.005), but PTSD symptoms at follow-up were not significantly reduced (SMD -0.34, 95% CI -0.74 to 0.06, p = 0.09). No significant improvement was found for PTSD caseness at the end of treatment, and there was possible worsening at follow-up from one study (n = 28). Interventions showed no benefits for depression symptoms at end of treatment (SMD -0.23, 95% CI -0.50 to 0.03, p = 0.09) or follow-up (SMD -0.23, 95% CI -0.70 to 0.24, p = 0.34). A significant improvement in functioning for psychological interventions compared to control was |
found at end of treatment (SMD -0.38, 95% CI -0.58 to -0.18, p = 0.0002) but not at follow-up, from only one study. No significant improvement emerged for quality of life at end of treatment (SMD 0.38, 95% CI -0.28 to 1.05, p = 0.26), with no data available at follow-up. The main study limitations were the difficulty in this field of being certain of capturing all eligible studies, the lack of modelling of maintenance of treatment gains, and the low precision of most SMDs, making findings liable to change with the addition of further studies as they are published.
---
Conclusions
Our findings show evidence that psychological interventions improve PTSD symptoms and functioning at the end of treatment, but it is unknown whether this is maintained at follow-up, with a possible worsening of PTSD caseness at follow-up from one study. Further interventions in this population should address broader psychological needs beyond PTSD while taking into account the effect of multiple daily stressors. Additional studies, including social and welfare interventions, will improve precision of estimates of effect, particularly over the longer term.
---
Author summary
Why was this study done?
• Torture occurs in the majority of countries around the world, often leaving survivors with prolonged physical and psychological problems. We still do not know what treatment for psychological problems is effective.
• This review aimed to calculate the effects of psychological, social, and welfare interventions on the mental health, functioning, and quality of life of torture survivors.
What did the researchers do and find?
• Published data from 15 randomised controlled trials (RCTs)-all of psychological interventions, including 1,373 participants across 10 countries-were systematically reviewed and analysed.
• Compared to control conditions, psychological interventions significantly reduced symptoms of post-traumatic stress disorder (PTSD) and improved functioning at the end of treatment, but not at follow-up.
• Psychological interventions did not significantly improve depression symptoms or quality of life.
• Psychological interventions did not significantly reduce the incidence of PTSD diagnosis, and one study, with 28 participants, showed an increase of PTSD diagnosis at follow-up compared to control conditions.
---
Introduction
Despite 156 countries having signed the United Nations Convention Against Torture and Other Cruel, Inhuman or Degrading Treatment and Punishment [1], torture is widespread, and Amnesty International has documented torture and other forms of ill treatment in 141 countries in 2014 [2]. Long-standing and ongoing armed conflict has likely led to the increased use of torture since. Worldwide, 352,000 fatalities resulting from organised violence were identified between 2014 and 2016 alone [3]. The prevalence of torture and resulting fatalities are likely higher but difficult to estimate given that perpetrators often obscure the use of torture, and there are multiple barriers to disclosure for survivors. Torture has psychological, physical, social, and spiritual impacts that interact in diverse ways. Psychological effects are well documented; predominantly post-traumatic stress, depression, anxiety, and phobias [4,5]. Physical effects are also diverse (for reviews, see [6,7]). In addition, torture survivors' disrupted lives can bring social and financial problems that contribute to and maintain psychological distress, whether as a refugee or in the country of origin [5,8,9].
Torture often occurs against a backdrop of national and international power imbalances, war, civil unrest, and the destruction or erosion of medical and other welfare services. Arguably, treatment needs to incorporate wider conceptualisations of damage and distress than are represented in standard Western psychological treatments for psychological trauma [10,11]. A review conducted in 2011 describes a limited range of interventions for torture survivors, tested in studies with significant limitations such as small sample sizes and unvalidated outcomes [6]. Given the scant literature, greater understanding of what works in treatment and rehabilitation for torture survivors is crucial in order to obtain maximum benefits from scarce resources.
A Cochrane systematic review and meta-analysis [12] aimed to summarise psychological, social, and welfare interventions for torture survivors but found eligible studies only of psychological treatment. The 9 randomised controlled trials (RCTs) included provided data on 507 adults and showed no immediate benefits of psychological therapy for psychological distress (as measured by depression symptoms), post-traumatic stress disorder (PTSD) symptoms, PTSD caseness, or quality of life. At follow-up, 4 studies with 86 participants showed moderate effect sizes in reducing psychological distress and PTSD symptoms. Conclusions were tentative, given the low quality of evidence, with underpowered studies and outcomes assessed in nonstandard ways, and no study assessed participation in community life or social and family relationships.
More recently, a meta-analysis of 18 pre-post studies of interventions for survivors of mass violence in low-and middle-income countries showed a large improvement in PTSD and depression across treatment [13] but smaller effects from controlled studies. Another recent review [14] concluded that cognitive behavioural therapy (CBT) interventions produced the best treatment outcome for PTSD and/or depression. However, both reviews recruited more widely than torture survivors. No recent systematic reviews or meta-analyses have focused on interventions for torture survivors. We conducted this systematic review and meta-analysis to assess the reported benefits or adverse outcomes in the domains of PTSD symptoms, PTSD caseness, psychological distress, functioning, and quality of life for psychological, social, and welfare interventions for torture survivors.
---
Methods
---
Search strategy and selection criteria
A systematic review was performed using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [15], which is available in S1 PRISMA Checklist. To be included, studies had to be RCTs or quasi-RCTs of psychological, social, or welfare interventions for survivors of torture against any active or inactive comparison condition; the same criteria were used as in the previous review [12], and the full protocol is provided in S1 Text. Quasi-RCTs, in which the method of allocation is known but not strictly random-such as the use of alternation, date of birth, and medical record number [16]-were included considering the difficulties of conducting RCTs in this population.
We extracted RCTs from searches of PsycINFO, MEDLINE, EMBASE, Web of Science, the Cumulative Index to Nursing and Allied Health Literature, Cochrane Central Register of Controlled Trials, the WHO International Clinical Trials Registry Platform, Clinical Trials.Gov, PTSDpubs, and the online library of Danish Institute Against Torture (DIGNITY) databases from 1 January 2014 (1 January 2013 in the case of Web of Science, the Cumulative Index to Nursing and Allied Health Literature, and PTSDpubs) through 22 June 2019 using key search terms including combinations of "torture," "randomised," "trial," and "intervention" with Boolean operators (S1 Text). There was no language restriction. We also searched reference lists of torture-specific reviews published in or after January 2014 and those emerging from the final set of included studies. We contacted corresponding authors when full texts were unavailable.
---
Data extraction
We initially screened titles and abstracts against the inclusion criteria, with the aim of identifying potentially eligible studies for which the full paper was obtained. One author (AH) initially screened titles and abstracts to select full papers; another author (AW) checked a subsample of the excluded papers and agreed with all exclusions. Full papers were screened and selected for inclusion by 2 authors independently and agreed upon after discussion (AH and AW).
Descriptive data, including participant characteristics, treatment mode, and setting, were collected. The primary area of interest for this review was outcomes in the domains of PTSD symptoms and caseness, psychological distress, functioning, and quality of life. PTSD symptoms were defined as the primary outcome given that the majority of identified reviews measured this. Psychological distress was measured as a secondary outcome, in the form of depression symptoms. Depression was chosen to define psychological distress because it is more distinct from PTSD than alternative scaled constructs of psychological distress, particularly anxiety. As in Patel and colleagues' review [12], functioning was measured by engagement in education, training, work, or community activity, and quality of life was defined as a change (positive or negative) in quality of life or well-being as measured by global satisfaction with life and extent of disability.
---
Statistical analyses
Studies in which a psychological, social, or welfare intervention was an active treatment of primary interest were investigated. When a trial included more than one arm delivering the same intervention content, data from those arms were combined. The respective control arms associated with these intervention arms were also combined, given that the main interest of this research is the impact of intervention relative to control. In studies in which both adjusted and unadjusted treatment effects for specific covariates were reported, the adjusted treatment effects were used.
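Where two arms were pooled, the standard Cochrane formula for combining two groups (combined N, mean and SD) is the usual approach; the sketch below illustrates that formula with hypothetical numbers, under the assumption that this is how the pooling was done here.

```python
# Sketch of the standard Cochrane formula for pooling two arms into a single
# group (combined N, mean and SD); example values are hypothetical.
from math import sqrt

def pool_arms(n1, m1, sd1, n2, m2, sd2):
    n = n1 + n2
    mean = (n1 * m1 + n2 * m2) / n
    var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2
           + (n1 * n2 / n) * (m1 - m2) ** 2) / (n - 1)
    return n, mean, sqrt(var)

# Two arms of the same therapy pooled before comparison with control.
print(pool_arms(20, 45.0, 10.0, 25, 42.0, 12.0))
```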
Due to varying data collection and reporting methods, this review included both continuous and dichotomous scales. Meta-analyses were conducted using Review Manager (RevMan version 5.3) software [16]. It was anticipated that there would be considerable heterogeneity in the data, measured as I², so a random-effects model was applied.
For continuous scales, treatment effects were estimated using standardised mean differences (SMDs). This requires the extraction of mean scores, standard deviations, and sample sizes for each arm. When standard deviations required for the analyses were not available, they were calculated from confidence intervals (CIs), as suggested in the Cochrane handbook [16]. For dichotomous data, treatment effects were estimated using odds ratios by extracting the number of events and sample sizes. All analyses were conducted as planned.
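For illustration, the sketch below shows the two standard calculations implied here: recovering a group SD from the 95% CI of its mean, and computing an SMD with the usual small-sample (Hedges) correction applied by RevMan. The example values are hypothetical, and the exact correction factor used in the analyses is an assumption.

```python
# Illustrative helpers for the continuous-outcome calculations described above.
from math import sqrt

def sd_from_ci(lower, upper, n):
    """SD of one group from the 95% CI of its mean (large-sample, z-based)."""
    se = (upper - lower) / 3.92        # 3.92 = 2 * 1.96; t quantiles for small n
    return se * sqrt(n)

def smd(m1, sd1, n1, m2, sd2, n2, hedges=True):
    """Standardised mean difference (intervention minus control)."""
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    if hedges:                          # small-sample correction
        d *= 1 - 3 / (4 * (n1 + n2) - 9)
    return d

# Hypothetical PTSD scores: intervention mean 2.1 (SD 0.6, n=40),
# control mean 2.4 (SD 0.7, n=38) -> a negative SMD favours the intervention.
print(smd(2.1, 0.6, 40, 2.4, 0.7, 38))
```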
The newly included studies were added to the 9 previous studies in each analysis. Analyses were run for end of treatment and follow-up when available. End of treatment was defined as data collected within 3 months or less from the end of treatment; follow-up was defined as more than 3 months after the end of treatment.
---
Quality of studies
The risks of bias were assessed using the Cochrane guidance [16]. Each study was classified for each of the categories into either low risk, high risk or unclear risk, with justifications. This quality assessment was completed by 2 authors independently (AH and AW), and disagreements were resolved by reference to the data in question. We related the risk of bias categories to the interpretation of effect sizes for the outcomes of studies.
---
Results
From an initial screen of 1,805 abstracts and titles, 6 RCTs since 2014 met our inclusion criteria [17][18][19][20][21][22] and were combined with the 9 RCTs identified in the previous meta-analysis (Fig 1) [23][24][25][26][27][28][29][30][31]. The characteristics of the 15 included studies are summarised in S1 Table . All eligible studies were of psychological interventions. Trials included 1,373 participants at the end of treatment (mean per study = 92) of the 1,585 that started treatment; a mean study completion rate of 86.6% with a range from 50% to 100%. Studies included 589 females and 784 males. Seven trials were conducted in Europe, 5 in Asia, and 3 in Africa. The most commonly used intervention was narrative exposure therapy (4 studies) or testimony therapy (3 studies), both of which draw on creating a testimony of traumatic events. Of the 6 new studies, all provided analysable data after calculating the standard deviation from CIs or standard errors. When neither CIs nor mean scores were available [14,21], the author was contacted, and the mean scores and standard deviations were obtained.
---
Quality of studies
According to Cochrane risk of bias assessment [16], one study had a high risk of bias in random sequence generation, 2 had a high risk of bias in allocation concealment, all 15 had a high risk of performance bias (inevitable in psychological treatment trials), 2 had a high risk of detection bias, 6 had a high risk of attrition bias, and no studies had a high risk of reporting bias. Therapist allegiance, treatment fidelity, therapist qualifications, and other biases were also assessed. Four studies had a high risk of bias due to therapist allegiance, 2 had a high risk of bias due to treatment fidelity, and 2 had a high risk of bias due to therapist qualifications (Fig 2). Other biases included varying content and length of treatment as judged by the therapist according to need, as well as the absence of a protocol for adaptation and translation of measures. A full breakdown of the risk of bias in each study is available in S1 Table.
---
PTSD symptoms
Twelve trials, with a total of 1,086 participants, reported data for PTSD symptoms no more than 3 months after the end of treatment [17][18][19][20][21][24][25][26][27][28][29][30][31], using several scales but all based on a similar formulation of PTSD. They were analysed for the effect of psychological intervention on PTSD at end of treatment using SMDs (Fig 3). There was a small to moderate reduction in PTSD symptomatology at the end of treatment (SMD -0.31, 95% CI -0.52 to -0.09, z = 2.79, p = 0.005). Between-study heterogeneity, I², was 55% (95% CI 0.38-0.68), indicating substantial heterogeneity [16]. The confidence in these results is limited overall, as unblinding of assessors may have contributed to detection bias in all but one study [30].
Seven trials, with 569 participants, reported data for PTSD symptoms more than 3 months after the end of treatment [19,21-24,26,27]. All used the Harvard Trauma Questionnaire (HTQ) to measure symptoms with the exception of Esala and Taing [19], who used the PTSD Checklist for the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5). They were analysed for the effect of psychological intervention on PTSD at follow-up using SMDs (Fig 3).
There was no difference between the intervention group and the control group (SMD -0.34, 95% CI -0.74 to 0.06, z = 1.68, p = 0.09) in PTSD symptoms at follow-up. Given the large CI, the precision of estimate was low, and all but one study [22] appeared to be underpowered. Heterogeneity was substantial at 66% (95% CI 0.49-0.77).
---
PTSD caseness
Four trials with 82 total participants, classifying participants using caseness as meeting criteria for PTSD no more than 3 months after the end of intervention [21,23,24,30], were analysed for the effect of psychological intervention on PTSD caseness at end of treatment (Fig 4). There was no overall benefit, with an odds ratio of 0.44 (95% CI 0.14-1.31, z = 1.48, p = 0.14). A heterogeneity of I² = 0% was noted for this comparison (95% CI 0-0.61), and a number of sources of bias in methodology were observed.
Only one trial compared PTSD caseness in intervention and control groups, at 6-month follow-up for 28 participants [21]. Caseness was significantly higher at 6-month follow-up in the intervention group compared with the control group, with an odds ratio of 7.58 (95% CI 1.2-48, z = 2.15, p = 0.03).
---
Psychological distress
Ten trials reported data for psychological distress, measured as depression, no more than 3 months after the end of treatment, with 988 participants [17-22,24,25,27,30]. They were analysed for the effect of psychological intervention on psychological distress at the end of treatment (Fig 5). There was no benefit of treatment over control (SMD -0.23, 95% CI -0.50 to 0.03, z = 1.71, p = 0.09) with a substantial heterogeneity of I² = 68% (95% CI 0.56-0.77).
Seven trials reported data for psychological distress, measured as depression using the Hopkins Symptom Checklist-25 (HSCL-25), more than 3 months after the end of treatment, with a total of 569 participants [19,21-24,26,27]. They were analysed for the effect of psychological intervention on psychological distress at follow-up using SMDs (Fig 5). There was no benefit of treatment over control for psychological distress at follow-up (SMD -0.23, 95% CI -0.70 to 0.24, z = 0.96, p = 0.34), and heterogeneity was considerable (I² = 76%, 95% CI 0.65-0.80).
---
Functioning
Three trials reported data for functioning at the end of treatment, for 584 participants [17,18,21], and were analysed for the effect of psychological intervention on functioning at the end of treatment (Fig 6). There was a moderate benefit of intervention over control for functioning (SMD -0.38, 95% CI -0.58 to -0.18, z = 3.72, p = 0.0002). A heterogeneity of I² = 15% was observed (95% CI 0-0.73).
Only one study (28 participants) provided analysable data on effects at 6-month follow-up [21]; it showed no statistically significant benefit for treatment over control (SMD 0.63, 95% CI -0.13 to 1.40, z = 1.62, p = 0.11).
---
Quality of life
Two trials [20,30], with 36 participants, assessed quality of life after treatment. The two scales scored improvement in opposite directions, so scores from the trial by Puvimanasinghe and Price [20] were reversed so that a positive effect size represented improvement. There was no effect of intervention over control on quality of life (SMD 0.38, 95% CI -0.28 to 1.05, z = 1.14, p = 0.26), and the precision of estimate was low. No study assessed quality of life at follow-up.
---
Adverse events and dropout
Two studies reported on adverse effects of treatment. Weiss and colleagues [22] reported that one participant attempted suicide after the first therapy session. The authors attributed this to the participant being a relative of the therapist, who failed to notify the supervisor because of concerns about stigma in the family. Another participant was hospitalised with severe depression and received therapy in the hospital but did not return to the study, and one participant died of a heart attack with no apparent relationship to participation in the study. In Wang and colleagues' [21] study, the intervention group increased in PTSD caseness over follow-up, a statistically significant finding, but the authors were not able to explain this result.
All but 2 trials [23,29] reported dropout during treatment. Of these, 4 reported greater than 20% dropout in the intervention arm [19,24,27,30], and one trial reported a 28% exclusion of participants overall, with no further detail given [28]. Four studies provided detailed reasons for dropout [18,24,27,30].
---
Clinical meaning of changes
Calculation of the SMD assumes that differences in standard deviations among studies reflect differences in assessment scales and not real differences in variability among study populations [16]. We chose Wang and colleagues' study [21] to calculate differences in PTSD symptoms using the HTQ, and in psychological distress (depression) using the HSCL-25. The HTQ uses a 4-point severity response scale. Respondents endorse how much each symptom has bothered them in the past week, as follows: not at all (1), a little bit (2), quite a bit (3), or extremely (4). The total score is the mean of item scores, with 2.5 suggested as the clinical cut-off score, above which a respondent has a high likelihood of PTSD [32]. The small to moderate effect size in reduction of PTSD symptoms for intervention over control represented a reduction of the mean pretreatment HTQ score from 2.49 to 2.37 post treatment. That is, participants fell slightly below the clinical cut-off both before and after treatment, so the clinical significance of this change is negligible. The HSCL-25 assessed depression, with 1.75 suggested as the clinical cut-off score and higher scores indicating greater depression. Again relating these scores to the study by Wang and colleagues [21], mean scores at pretreatment assessment (3.02), post-treatment assessment (2.77), and follow-up (2.55) all fell within the clinical range for depression.
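The clinical-significance check described here reduces to comparing mean item scores against the published cut-offs. The sketch below illustrates this with hypothetical HTQ item ratings and the cut-offs quoted above (2.5 for the HTQ, 1.75 for HSCL-25 depression); it is not the review's code.

```r
# Hypothetical HTQ item ratings on the 1-4 severity scale
htq_items <- c(2, 3, 2, 3, 2, 2, 3, 2, 3, 2, 2, 3, 2, 3, 2, 2)
htq_score <- mean(htq_items)     # total score = mean of item scores
htq_score > 2.5                  # TRUE would suggest a high likelihood of PTSD

classify <- function(score, cutoff) {
  if (score > cutoff) "above clinical cut-off" else "below clinical cut-off"
}
classify(2.37, 2.5)    # post-treatment HTQ mean reported for Wang and colleagues' study
classify(2.77, 1.75)   # post-treatment HSCL-25 depression mean: still in the clinical range
```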
---
Discussion
This systematic review and meta-analysis of 15 studies of interventions for torture survivors included 1,373 participants from 10 countries. Six of the 15 studies were published since the previous review, but the sample size increased 3-fold. The range of treatments was somewhat wider, but treatments were still most often compared with inactive controls rather than with other treatment. The problems of torture survivors were largely conceptualised in terms of PTSD symptoms that constituted the focus of treatment and, often, the primary outcome. Meta-analysis demonstrated few benefits of treatment: a statistically significant but clinically small decrease in PTSD symptoms at the end of treatment (from varied psychological interventions compared to mostly inactive controls) that was not found at follow-up. Other outcomes (PTSD caseness and psychological distress, usually depression and often of clinical severity) were not significantly different either at the end of treatment or at follow-up, with the exception of a worsening of PTSD caseness at follow-up, a poorer outcome than in the previous review [12] and clinically very disappointing. Few studies assessed functioning or quality of life, so results must be interpreted with caution, but they showed no improvement in quality of life and an improvement in functioning only at the end of treatment, not at follow-up.
Outcomes representing broader health and participation in society were neglected, as was the context of social, economic, and political uncertainties survivors face: threats to civil and legal status, accommodation, safety, connections with family and friends, and other assaults on well-being [8,33,34]. Because refugees have a high rate of life events that can facilitate or undermine treatment gains, it would be helpful for studies to monitor these changes across the timescale of treatment and follow-up [35]. It was disappointing to find these shortcomings persisting despite comment in our previous review [12] and in others [36,37].
Although it should be interpreted with caution, the finding of worsening at follow-up in the study by Wang and colleagues [21], using CBT with prolonged exposure, should alert researchers to the importance of studying long-term outcomes and the potentially harmful effects of psychological interventions and other contextual factors post treatment. Furthermore, 4 out of the 15 trials reported over 20% dropout in the intervention arm. It is possible that this is a function of greater social instability of the participant population and understandable preoccupation with meeting basic human needs and rights. However, more investigation of treatment expectations and acceptability is required. Conducting and analysing follow-up interviews, using nonaligned and nonbiased interviewers, would lead towards better understanding of what may work and for whom.
Other reviews of psychological treatments for torture survivors [36,38] or for traumatised refugees [39] have produced more optimistic accounts of benefits of therapy, although they raise similar concerns regarding methodology and cultural appropriateness of interventions. By contrast, Salo and Bray [37] reviewed interventions in relation to what they described (drawing on Bronfenbrenner [40]) as the 'ecological' needs of torture survivors: microsystem life domain, such as family, social, legal, and occupational domains; macrosystem domain, mainly consisting of cultural and language features of the trials; and the chronosystem domain, represented in time of follow-up assessments. They found relatively scant recognition of needs in any of these areas, either in assessment or intervention. This appears to be a very promising framework for reconsidering therapeutic interventions in the field.
Methodological quality of the included studies was largely similar to that in our previous review. Apart from the absence of blinding of therapists or patients to treatment allocation, rarely possible in trials of psychological treatment, bias arose mainly from incomplete reporting of outcomes, dropping noncompleters from outcome analysis, and uncertainty about whether the intended treatment had been delivered as designed, mainly because of lack of therapist qualifications to deliver it. Whether training volunteer therapists, with no existing clinical competences, in the specific therapeutic techniques for the trial is adequate to produce treatment fidelity is an open question and should be addressed within trials. The same comment applies to cultural adaptation of treatment that originated in Western healthcare. Studies gave little detail of what it was they meant by 'cultural adaptation', beyond translation of outcome scales and treatment materials, but effective cultural adaptation involves extensive work between people from all the main cultures represented in a study, who understand the context and content of treatment. Similar methods are required for true validation of translated scales in the languages of the cultures in which they will be used [41]. Even when these procedures are followed, it is by no means clear how a treatment is established as culturally adapted beyond the claims of its authors.
The review has some limitations that potentially affect conclusions. Our search could have been widened by including the grey literature, but a zero yield from around 1,500 chapters, reports, and other articles accessed for the previous review led us to decide against it. It is possible that in the grey literature, or even in the peer-reviewed literature, our relatively broad search nevertheless missed a trial labelled in a way we did not anticipate, since the nomenclature is not well standardised. While we did not exclude studies in other languages, the majority of the databases searched have shown varied and incomplete coverage of non-English material [42], particularly from low- and middle-income countries [43], indicating a potential database coverage bias. A possible further analysis would have been to fit a model to all effect sizes of each outcome, including time (end of treatment versus follow-up) as a moderator; because we did not, we cannot draw conclusions about maintenance of treatment gains at follow-up. We interpreted our findings according to dichotomous notions of statistical significance and recognise that some overall effect sizes could change (for better or worse) with the addition of one or more studies.
Heterogeneity among studies was substantial and arose from multiple sources: participants, therapists, therapeutic methods, outcomes, delivery, and setting. This produced generally high levels of between-study heterogeneity (I²) that made estimates of effect sensitive to inclusion or exclusion of single studies. Given the weakness and lack of precision of the I² statistical test [44], we also calculated the CIs as suggested by Higgins and Thompson [45]. While CIs were generally narrow in cases of high heterogeneity, where low heterogeneity was indicated (I² = 0% for PTSD caseness at end of treatment and I² = 15% for functioning at end of treatment), wide CIs were produced, ranging from 0% to 61% and 73%, respectively, so heterogeneity should be inferred with caution in these cases. We did not anticipate having the power available for subanalyses, but these could be planned in a further update, to investigate each source of heterogeneity. Although widening our scope to refugee studies would have included some family and community interventions, heterogeneity would likely have been even greater, exacerbating problems of interpretation.
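For readers unfamiliar with confidence intervals around I², the sketch below shows one common test-based approximation attributed to Higgins and Thompson, converting a CI for H into a CI for I². The formulas are reproduced from memory of the original paper and the Q value is hypothetical, so this should be checked against reference [45] before reuse.

```r
# Hedged sketch: test-based CI for I-squared via the H statistic
i2_ci <- function(Q, k) {
  H <- sqrt(max(Q, k - 1) / (k - 1))           # H statistic, bounded below at 1
  se_lnH <- if (Q > k) {
    0.5 * (log(Q) - log(k - 1)) / (sqrt(2 * Q) - sqrt(2 * k - 3))
  } else {
    sqrt(1 / (2 * (k - 2)) * (1 - 1 / (3 * (k - 2)^2)))
  }
  H_ci <- exp(log(H) + c(-1.96, 1.96) * se_lnH)
  H_ci <- pmax(H_ci, 1)                        # H cannot fall below 1
  i2 <- function(h) 100 * (h^2 - 1) / h^2      # convert H to I-squared (%)
  c(I2 = i2(H), lower = i2(H_ci[1]), upper = i2(H_ci[2]))
}

i2_ci(Q = 24.4, k = 12)   # hypothetical Q for a 12-trial analysis, for illustration only
```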
Given the complexity of torture survivors' needs and the obstacles they face in reconstructing a meaningful life, the emphasis of interventions on symptoms of PTSD is strikingly narrow, unless reducing or resolving these symptoms is seen as a priority or as the key to other improvements; none of the studies asserted this. It is not even clear that basic security and financial needs are addressed before offering specialised psychological interventions [46]. Thus, integration of interventions addressing the needs and priorities of torture survivors (not assessed or stated) seemed largely lacking. Recruitment into the trial was assumed to mean that survivors' PTSD symptoms were their priority as the target of intervention, though the fact that 4 out of 15 included studies reported greater than 20% dropout in the intervention arm raises questions about the relevance and appropriateness of the interventions for survivors. Furthermore, the development of interventions in terms of cultural and language appropriateness may require more fundamental exploration and questioning of Western models of psychological problems and treatment than was evident in these trials. Recent models of collaborative care [47] go some way towards this but still fall short of the ecological scope described by Salo and Bray [37]. Last, where resources are scarce and far outstripped by needs, as in many low-and middle-income countries, the model of training local volunteers or healthcare staff in Western methods of intervention delivered mostly as individual therapy may mean that interventions are more culturally embedded (depending on what the trainers or study researchers allow by way of adaptation). This, however, needs empirical support, as well as an assessment of the potential harmful impacts on the volunteer therapists and on (other) survivors they work with.
Perhaps because this review was limited to RCTs, the newer trials largely resembled the older ones, apart from covering a wider range of interventions; there was little evidence of more collaborative and integrated interventions such as those developing for refugee populations [48] or envisaged in a social-ecological framework [37]. It might be that single case methods [49] are more applicable to assessing psychological, social, welfare, and other interventions for the complex and diverse needs of many torture survivors, for whom distress stems not only from the violent and traumatic experiences endured but also from current social, material, and legal conditions [34].
Evaluation of interventions needs to match this breadth of difficulties; at a minimum, interventions should address quality of life and include follow-up over realistic time frames. Qualitative studies could helpfully inform more participant-focused assessment of treatment outcome, with the addition of observed events such as improved overall health; enrolment in further education, training, or work; and participation in community or society.
In conclusion, all RCTs we found in this systematic review and meta-analysis were of psychological interventions. Small improvements for intervention over control were found for PTSD symptoms and functioning after treatment but not at follow-up, nor was any improvement evident for psychological distress at either time point or for quality of life at the end of treatment. The overall confidence in these results and precision of estimate is still less than satisfactory, and further studies are likely to change the estimates of effect, but the differences between our findings and the impression of treatment effectiveness from narrative reviews are substantial and suggest that more survivor-focused conceptualisation of problems and improved methodology are needed.
---
Data relating to analyses are within the manuscript and Supporting information. Further data relating to methodology can be found in University College London's open access repository at http://discovery.ucl.ac.uk/10056876/.
---
Supporting information
| 32,308 | 2,028 |
85b0bdb09fc5fc4318b574fc46d145e23147ef06 | Restructuring personal networks with a Motivational Interviewing social network intervention to assist the transition out of homelessness: A randomized control pilot study | 2,022 | [
"JournalArticle"
] | Social relationships play a key role in both substance use and homelessness. Transitioning out of homelessness often requires reduction in substance use as well as changes in social networks. A social network-based behavior change intervention that targets changes personal social networks may assist the transition out of homelessness. Most behavior change interventions that incorporate social networks assume a static network. However, people experiencing homelessness who transition into housing programs that use a harm reduction approach experience many changes in their social networks during this transition. Changes may include disconnecting from street-based network contacts, re-connecting with former network contacts, and exposure to new network members who actively engage in substance use. An intervention that helps people transitioning out of homelessness make positive alterations to their social networks may compliment traditional harm reduction housing program services.We conducted a pilot randomized controlled trial (RCT) of an innovative Social Network Intervention (MI-SNI), which combines network visualization and Motivational Interviewing to assist adults transitioning out of homelessness. The MI-SNI provides feedback to new residents about their social environments and is designed to motivate residents to make positive changes in both their individual behavior and their personal network. In a sample of 41 adult housing program residents with past year risky substance use, we examined whether participants randomized to receive a MI-SNI showed greater changes in their personal networks over 3 months compared to those receiving usual care. | Introduction
Social relationships play a key role in a variety of public health problems [1][2][3], including alcohol and other drug (AOD) use and homelessness [4][5][6]. AOD use spreads through networks [7] due to a variety of network mechanisms, such as social comparison, social sanctions and rewards, flows of information, support and resources, stress reduction, and socialization [8][9][10]. Homelessness is often precipitated by AOD use problems [11] and continued AOD use among people experiencing homelessness is influenced by continued exposure to AOD use in their social networks [12][13][14][15]. Continued AOD use impedes transitioning out of homelessness and into housing assistance, such as when AOD abstinence is a requirement for housing. Therefore, addressing the interrelated problems of AOD use and homelessness requires a focus on social networks, which play a wide range of positive and negative roles in assisting and impeding the transition out of homelessness [12][13][14][16][17][18][19][20][21][22][23][24][25][26].
Many behavior change interventions informed by social network analysis (SNA) have been developed recently [27][28][29][30], have addressed AOD use in a variety of populations [30], and can potentially address AOD use among people experiencing homelessness. Four styles of incorporating networks into interventions have emerged [27]: 1) identifying groups in a network to target based on structural position ("segmentation"), 2) identifying and intervening with key individuals based on their structural location ("opinion leaders"), 3) activation of new interactions between people without existing ties in a network ("induction"), and 4) changing the existing network ("alteration"). For the most part, network intervention approaches use methods informed by diffusion of innovation theory [31] and aim to maximize the effects of a behavior change intervention through its spread within a well-defined and clearly bounded network (such as students in the same school).
There are challenges in applying diffusion-based SNA behavior interventions to assist people to reduce AOD use while transitioning out of homelessness. The segmentation, opinion leader, and induction approaches are inappropriate because they assume a static, bounded network [27]. However, people transitioning out of homelessness and into housing programs do not belong to a clearly defined network. They can experience heightened social volatility due to loss of contact with people they interacted with on the street, coupled with sudden and ongoing contact with new neighbors. Distancing themselves from AOD using network members may help them decrease AOD use by reducing exposure to high-risk behavior. At the same time, these individuals may have developed strong and supportive ties while living on the street and may have reservations about ending these relationships, even with members of their network who they realize hamper their efforts at positive behavior change and stability. Those who transition into housing programs may experience increased opportunities to develop new pro-social connections and reconnect with positive network ties who can provide key social support necessary to reduce AOD use. On the other hand, transitioning into a housing program that uses a harm reduction model [15,26,32,33] may result in continued exposure to AOD because these programs do not require residents to abstain from AOD use. This social upheaval experienced by individuals transitioning out of homelessness suggests that an AOD reduction behavior change intervention informed by SNA that assumes a static and bounded network is inappropriate. Network "alteration" intervention approaches, on the other hand, do not make this assumption and appear to be a better fit for addressing the social volatility associated with transitioning out of homelessness.
Our team recently developed a Motivational Interviewing Social Network Intervention (MI-SNI) designed to reduce AOD use among adults with past year problematic AOD use who recently transitioned from homelessness to residing in a housing program [34][35][36]. The MI-SNI targets alterations of the "personal" networks of independently sampled individuals, rather than individuals who are members of a static, bounded network [37][38][39]. This approach is appropriate for people transitioning out of homelessness because each person experiencing this transition is at the center of a unique and evolving group of interconnected people who play a variety of roles in assisting or hampering their transition. The MI-SNI combines visualizations of personal network data with Motivational Interviewing (MI), an evidence-based style of intervention delivery that triggers behavior change through increased self-determination and self-efficacy while reducing psychological reactance [40,41]. Results from a pilot randomized controlled trial of the MI-SNI on AOD-related outcomes found that formerly homeless adults who recently transitioned to a housing program and received the intervention experienced reductions in AOD use, and increased AOD readiness to change and abstinence self-efficacy, compared to those who were randomly assigned to the control condition [34]. Examining whether the MI-SNI is associated with actual changes to participants' social networks, a hypothesized mechanism through which it is expected to affect AOD-related outcomes, is an important next step in this line of research.
The present study compares personal network composition and structure data collected before and after the intervention period to explore if the MI-SNI was associated with longitudinal changes in personal networks of MI-SNI intervention participants compared to participants who received usual case management services. This study provides a preliminary test of several hypotheses. Our primary hypothesis was that the intervention would be associated with a change in network composition, primarily a decrease in the number of network members who influence the participants' AOD use, such as those who are drinking or drug use partners. We also hypothesized that receiving visualization feedback that highlighted supportive network members would prompt participants to take steps to retain supportive network members, drop unsupportive members of their networks, and add new network members who provide support, leading to an overall increase in supportive ties. For alters who remained in the network after the intervention period, we hypothesized that MI-SNI recipients would be more likely to change their relationships with these network members, resulting in fewer AOD risk behaviors with them. Finally, we tested an exploratory hypothesis that MI-SNI recipients would make changes to their networks that would result in them having significantly different overall network structures (size and connectivity among network members) and more network member turn-over between waves. Finding such intervention effects on network structure and turn-over would suggest that the MI-SNI influenced how participants interacted with their social environments during the intervention period.
---
Material and methods
---
Intervention design, setting, and participants
The complete and detailed plan for the conduct and analysis of this Stage 1a-1b randomized controlled trial (RCT) is available elsewhere [36] and the clinical trial has been registered (ClinicalTrials.gov Identifier: NCT02140359). Detailed descriptions of the development and beta testing of the Stage 1a computer interface, feasibility tests of the intervention procedures, pilot test participant characteristics, and initial pilot test results are also available elsewhere [35,36]. Participants were new residents of a housing program for adults transitioning out of homelessness in Los Angeles County recruited between May 2015 and August 2016. The primary analytic sample comes from the initial pilot test site, Skid Row Housing Trust (SRHT), which provides Permanent Supportive Housing (PSH) [42][43][44][45][46][47] services in Skid Row, Los Angeles. PSH programs do not require AOD abstinence or treatment, but do provide case management and other supportive services such as mental health and substance abuse treatment. SRHT residents and staff participated in project planning and beta testing prior to recruitment [35,36]. The intervention procedures were designed to be delivered during typical case manager sessions with new residents to supplement and improve the support they provide residents by raising both the case manager's and resident's awareness of the role that the resident's social environment plays in the transition out of homelessness. Beginning in February 2016, an additional supplemental sample was recruited from SRO Housing Corporation (SRO Housing), which is a similar housing program also located in Skid Row. This additional recruitment was in response to slower than expected monthly recruitment rates from SRHT and a projected shortfall in our targeted recruitment sample size of 15-30 subjects per intervention arm, which is a rule-of-thumb recommendation of the National Institute on Drug Abuse for funding Stage 1b Pilot Trials [48].
Residents were recruited through SRHT and SRO Housing leasing offices prior to receiving the assignment of a housing unit. Eligible participants were English speakers aged 18 years or older who had been housed within the past month and screened positive for past-year harmful alcohol use (Alcohol Use Disorders Identification Test (AUDIT-C) score ≥ 4 for men and ≥ 3 for women) [49] or drug use (Drug Abuse Screen Test (DAST) score greater than 2) [50][51][52]. Of the 149 residents contacted by the research team, 49 were eligible and were randomized into the intervention arm (N = 25) or the control arm (N = 24) using a permuted block randomization strategy stratified by gender. Full recruitment details and results are provided in the Fig 1 CONSORT diagram along with a CONSORT checklist in S1 Appendix. Eligible residents were informed of their rights as research participants and provided written consent. Retention in the study was excellent, with 84% of participants (n = 21 intervention, n = 20 control) completing the follow-up assessment three months later. Participants averaged 48 years of age, were primarily male (80%), African American (56%), had a high school education or less (68%), were never married (66%), had children (59%), and received an average of $471 in monthly income. Full details about participant demographics and AOD use are available elsewhere [34]. All procedures were approved by the authors' Institutional Review Board (IRB) (Study ID: 2013-0373-CR02) and the complete and detailed plan for the conduct and analysis of the trial that was approved by the IRB before the trial began is available in S2 Appendix. A Federal Certificate of Confidentiality was obtained for this study, which provided additional privacy protection from legal requests.
---
Baseline and follow-up data collection procedures
The purpose of the baseline and follow-up network data collection assessments was to measure participants' personal network characteristics when they first moved into their supportive housing unit and 3 months later to provide measures of network change and test if those who were offered the intervention experienced significantly different network changes compared to those randomly assigned to the control condition. Personal network assessment interviews were conducted through one-on-one, in-person interviews (~45-60 minutes) by independent data collectors who did not have access to the assignment of IDs to study arm and were therefore blind to study condition. Interviews were conducted using the social network data collection software EgoWeb 2.0 (egoweb.info) installed on a laptop computer. Participants were paid $30 to complete the baseline interview and $40 for the follow-up.
We followed common procedures for collecting personal network data [37] used in previous studies of AOD use and risky sex among homeless populations [12][13][14][53][54][55][56][57]. Respondents (referred to as the "egos" in personal network interviews) were first asked questions about themselves, including demographic questions (baseline only) and a series of questions about their own AOD use. After these questions, the egos were asked the following standard question prompting them to name up to 20 people in their network (referred to as a "name generator" question):
"Now I'd like to ask you some questions about the people that you know. First, I'd like for you to name 20 adults, over 18 years old, that have been involved in your life over the past year. We do not want their full names-you can use their first names, initials or descriptions. These should be people you have had contact with sometime in the past year-either face-toface, by phone, mail, e-mail, text messaging, or online. Start by naming the people who have been the most significant to your life-either in a positive way or a negative way. You can decide for yourself who has been significant, but consider those who have had a significant emotional, social, financial, or any other influential impact on your life. We'll work outwards toward people who have less significance. You can name any adults you have interacted with no matter who they are or where they live or how much time you have spent with them."
Once each ego provided a list of names (referred to as network "alters"), they were asked a series of questions about each person (referred to as "name interpreter" questions). Respondents were also asked, for each unique alter-alter tie, if these two people knew each other. These personal network questions were asked at both baseline and follow-up and these responses provided raw data for measurements of change in personal networks.
---
Intervention procedures
Residents who completed baseline interviews were randomly assigned to either the intervention or control arms. Those assigned to the intervention arm were offered four biweekly in-person sessions with an MI-trained facilitator. Full details about the intervention delivery, including examples of the visualizations presented to participants during the session, are available elsewhere [34]. Briefly, facilitators conducted a brief personal network interview (~15 minutes) focusing on recent network interactions (past 2 weeks). Name generator and name interpreter question wording were selected to generate a series of visualizations of the resident's recent interactions with their immediate social network. These visualizations highlighted different aspects of the network (network centrality, AOD use, social support) and were used to guide a conversation about the participant's social network in an MI session that immediately followed the personal network interview.
---
Measures
Network outcomes: AOD use/influence. We constructed four types of network AOD use/influence measures from three name interpreter questions. Participants identified which alters they drank alcohol with and whether they engaged in this behavior over the past 4 weeks. Based on this question, alters were categorized as a drinking partner and a recent drinking partner. A similar question was asked about other drug use with each alter, which was used to classify alters as a drug use partner and recent drug use partner. Participants were also asked if they drank more alcohol or used more drugs than usual when they were with the alter and if this happened recently. This question was used to classify each alter as an AOD use influence alter and a recent AOD use influence alter. These variables were combined into overall any risk and any recent risk dichotomous variables, coded as true if any of the above variables was true. For each of these dichotomous variables, an overall network proportion variable was constructed by summing the number of alters with the characteristic and dividing by the total number of alters in the ego's personal network.
Network outcomes: Social support. We constructed four types of network social support variables from 3 name interpreter questions. Respondents were asked if they received three different types of support from each alter: emotional support (e.g. encouragement), information support (e.g. advice), and tangible support (e.g. money, transportation, food) and if this support happened in the past 4 weeks. Alters were classified as having given each of these types of support both ever and recently. Also, alters who provided at least one of these types of support were classified as any support and any recent support. For each alter social support variable, an overall network support proportion variable was constructed by summing the number of alters with the support characteristic and dividing by the total number of alters in the ego's personal network.
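To make the construction of these proportion measures concrete, the base R sketch below rolls hypothetical alter-level ratings up to ego-level network proportions for both the AOD risk and the support variables. The data frame, column names, and values are invented for illustration and do not come from the study dataset.

```r
# Hypothetical alter-level ratings: one row per alter, nested within egos
alters <- data.frame(
  ego_id    = c(1, 1, 1, 1, 2, 2, 2),
  drink     = c(TRUE, FALSE, TRUE, FALSE, FALSE, FALSE, TRUE),    # drinking partner
  drugs     = c(FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE),  # drug use partner
  influence = c(FALSE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE),   # AOD use influence
  support   = c(TRUE, TRUE, FALSE, TRUE, TRUE, FALSE, TRUE)       # any support
)

# "Any risk" is true if any of the three risk ratings is true
alters$any_risk <- with(alters, drink | drugs | influence)

# Ego-level proportions: mean of a logical gives the share of alters with the trait
props <- aggregate(cbind(drink, any_risk, support) ~ ego_id, data = alters, FUN = mean)
names(props)[-1] <- c("prop_drink", "prop_any_risk", "prop_support")
props
```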
AOD risk relationship change outcomes. We constructed four types of AOD risk relationship measures to test if the intervention was associated with egos changing their AOD-related behavior with alters who remained in their networks (in contrast to alters who were removed or added to their networks across assessments). To construct these measures, we first identified which network members were named at both assessments by matching the names listed at the baseline and follow-up interview for each respondent to identify unique alters. Next, we compared the responses about retained alters' AOD risk at the baseline and follow-up assessments to identify those who changed status as drinking partners, drug use partners, AOD use influence partners, or any AOD risk partners. For each of these four types of status changes, we constructed: (a) stopping measures to indicate alters who had the characteristic at baseline but did not have it at follow-up; and (b) starting measures to indicate alters who did not have the characteristic at baseline but had it at follow-up. We constructed overall measures of each of these variables for each ego by counting the number of alters who had the relationship change characteristic.
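A minimal sketch of the alter-matching and stopping/starting logic is shown below, assuming alters can be matched on the name the ego provided at each wave. The column names and example data are assumptions for illustration only.

```r
# Hypothetical baseline and follow-up alter lists for one ego
baseline <- data.frame(ego_id = 1, alter = c("A", "B", "C", "D"),
                       drink  = c(TRUE, TRUE, FALSE, FALSE))
followup <- data.frame(ego_id = 1, alter = c("A", "B", "C", "E"),
                       drink  = c(FALSE, TRUE, TRUE, TRUE))

# Retained alters are those named at both waves (matched on ego and alter name)
retained <- merge(baseline, followup, by = c("ego_id", "alter"),
                  suffixes = c("_t1", "_t2"))

# Counts of retained alters who stopped or started being drinking partners
stopped_drinking_partner <- sum(retained$drink_t1 & !retained$drink_t2)  # alter A
started_drinking_partner <- sum(!retained$drink_t1 & retained$drink_t2)  # alter C
c(stopped = stopped_drinking_partner, started = started_drinking_partner)
```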
Network structure and network member turnover outcomes. We constructed measures of overall cross-wave network structure to explore associations of overall network size and interconnectivity with intervention status. Matching alter names across waves enabled construction of a cross-wave network that included all alters named at either wave. Next, based on these cross-wave networks, we constructed common measures of personal network structure [38] including a measure of network size (i.e., total unique alters named), and two measures of network connectivity: cross-wave density (the ratio of existing ties between network members to the total possible number of ties) and cross-wave components (number of groups of network members with no connections to other members of the network). To measure network turnover, alters were classified as either dropped alters (named at baseline only), added alters (named at follow-up only), or retained alters (named in both waves). For each respondent, we constructed counts of dropped, added, or retained alters in the cross-wave network.
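The cross-wave structural and turnover measures can be illustrated with the igraph sketch below. The alter lists and alter-alter ties are hypothetical, and this is a simplified stand-in for however the study actually computed these measures.

```r
library(igraph)

# Hypothetical alters named at each wave for one ego
alters_t1 <- c("A", "B", "C", "D")
alters_t2 <- c("A", "B", "C", "E")
all_alters <- union(alters_t1, alters_t2)               # cross-wave network members

# Hypothetical alter-alter "know each other" ties reported by the ego
ties <- data.frame(from = c("A", "B", "C"),
                   to   = c("B", "C", "E"))
g <- graph_from_data_frame(ties, directed = FALSE,
                           vertices = data.frame(name = all_alters))

network_size  <- vcount(g)                               # total unique alters named
cross_density <- edge_density(g)                         # observed / possible ties
n_components  <- components(g)$no                        # disconnected groups of alters

dropped  <- setdiff(alters_t1, alters_t2)                # named at baseline only
added    <- setdiff(alters_t2, alters_t1)                # named at follow-up only
retained <- intersect(alters_t1, alters_t2)              # named at both waves
c(size = network_size, density = cross_density, components = n_components,
  dropped = length(dropped), added = length(added), retained = length(retained))
```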
Background variables. Demographic and AOD use variables were used to inform the construction of model weights to adjust for participants who did not complete both assessments. Demographic variables captured in the baseline assessment included age, gender, race/ ethnicity, education, number of children, marital status, and income. AOD use variables included the quantity and frequency of alcohol use and days using marijuana in the past 4 weeks and an assessment of readiness to change AOD use [58]. Also included in the construction of weights were variables assessing housing program (SRHT vs. SRO) and intervention arm.
---
Analyses
The primary goal of the current study is to provide preliminary empirical evidence of the intervention's effect on changes to network composition and structure for intervention recipients compared to participants assigned to the control condition. The results presented here were generated with the same analytic approach as a previous study that found an association between receipt of the intervention and changes in participants' AOD behavior and attitudes [34]. We used an intent-to-treat [59] approach by offering follow-up to all participants and analyzing their data to reduce type I errors [60]. We constructed and tested a series of regression models with each AOD use/influence proportion and each social support proportion from the follow-up assessment as the dependent variable with the intervention group indicator as the predictor variable, while controlling for the baseline measure of the dependent variable. We also constructed regression models with each of the AOD risk relationship count variables and each network structure and turnover variable with intervention group as predictor variable while controlling for network size at baseline. We used linear regression for continuous outcomes and Poisson regression for count outcomes.
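The modelling step can be sketched with the survey package as below. The toy data frame, variable names, and weights are invented for illustration; the sketch only shows the form of a weighted linear model for a follow-up proportion controlling for its baseline value and a weighted Poisson model for a count outcome controlling for baseline network size, not the study's actual scripts.

```r
library(survey)

# Toy participant-level data (values are illustrative assumptions)
set.seed(1)
dat <- data.frame(
  wt = runif(41, 0.8, 1.3),                    # nonresponse weights
  intervention = rep(0:1, length.out = 41),    # study arm indicator
  netsize_t1 = sample(5:20, 41, replace = TRUE),
  prop_drink_t1 = runif(41),
  prop_drink_t2 = runif(41),
  n_stopped_drinking = rpois(41, 1)
)

des <- svydesign(ids = ~1, weights = ~wt, data = dat)

# Continuous outcome: follow-up proportion controlling for its baseline value
m_prop <- svyglm(prop_drink_t2 ~ intervention + prop_drink_t1, design = des)

# Count outcome: retained alters who stopped being drinking partners,
# controlling for baseline network size; exponentiated coefficients give IRRs
m_count <- svyglm(n_stopped_drinking ~ intervention + netsize_t1,
                  design = des, family = poisson())
exp(coef(m_count)["intervention"])
```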
The models were fitted using the "survey" package in R version 3.3.1 to include nonresponse weights. These weights enabled computation of accurate standard errors and accounted for the potential bias caused by unit non-response missing data [61] due to participants skipping the follow-up assessment or dropping out of the study. Of the 49 eligible study participants who completed a baseline assessment, 41 also completed the 3-month follow-up assessment, and responders differed from non-responders on a few characteristics, such as income and whether they were housed in SRHT or SRO Housing. The nonresponse weights were estimated using a non-parametric regression technique called boosting [62,63], rather than logistic regression, as implemented in the TWANG R package [64], with baseline outcome and demographic variables included in the model. We calculated Cohen's d effect sizes from the parameter estimates of the regressions and the pooled standard deviation at baseline [65,66].
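Effect sizes of this kind can be computed as in the hedged sketch below: the intervention coefficient divided by the pooled baseline standard deviation of the outcome. The numbers are hypothetical and chosen only to illustrate the arithmetic.

```r
# Pooled baseline SD across the two arms
pooled_sd <- function(sd1, n1, sd2, n2) {
  sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))
}

# Cohen's d: model coefficient for the intervention term / pooled baseline SD
cohens_d <- function(estimate, sd1, n1, sd2, n2) estimate / pooled_sd(sd1, n1, sd2, n2)

# Hypothetical example: a -0.13 model estimate for a network proportion whose
# baseline SDs were 0.15 (intervention) and 0.17 (control), 14 participants each
cohens_d(estimate = -0.13, sd1 = 0.15, n1 = 14, sd2 = 0.17, n2 = 14)
```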
For each model, we conducted two stages of analysis, similar to our previous approach [34]. First, we analyzed data from our primary sample, the 28 participants from SRHT only, because the MI-SNI was developed for SRHT residents with input from SRHT staff and SRHT case management process. Second, we conducted a secondary analysis on the full sample of 41 that included the small number of residents from a different housing program (SRO Housing). Additional details about the justification of this two stage approach, including details about the differences between the participant samples, is available elsewhere [34].
---
Results
---
Network composition
Table 1a presents descriptive statistics (i.e., means and standard deviations) for the baseline and follow-up network proportion measures for the SRHT intervention and control group participants. In addition, the table presents results from the regression models for the SRHT sample analysis predicting the intervention effect on proportions of types of network members at follow-up controlling for the same baseline network composition measures. Each row presents the results of one model. Table 1b presents these same findings for the full sample. The intervention effect was significant at the 95% confidence level with a large effect size for the proportion of drinking partners in the network at follow-up for the SRHT residents only. On average, intervention recipients had 13% fewer recent drinking partners in their networks at follow-up compared to participants in the control arm, controlling for baseline personal network composition (p = .042, d = .81). The average change in proportion of recent drinking partners in the overall sample was not significantly different between intervention and control recipients (p = .145). There was also a decrease in the proportion of alters with "any risk" for the SRHT residents only: the average 13% decrease in the SRHT-only sample for intervention participants compared to control participants was marginally significant with a medium to large effect size (p = .063, d = .74). The model for the full sample did not reach significance (p = .21).
Table 2a (SRHT only) and 2b (full sample) provide descriptive statistics for counts of alters who changed their AOD use and risk influence relationship status with egos between baseline and follow-up assessments and results of models testing if intervention status was significantly associated with these counts. Each model with count outcomes controlled for size of the network at baseline (number of alters named). The tables also present results of exploratory tests of the intervention effects on network structure and turnover. Each model estimate and 95% CIs were converted to incident rate ratios (IRR) (excluding network density) because model estimates of count outcomes can be easily interpreted as predicted % increase or decrease [67].
Models testing for associations between intervention arm and counts of changing relationships identified several medium-sized effects for the full sample. First, intervention participants had an average of 2.68 times more retained alters who stopped being drinking partners (i.e., alters whom the respondent reported as drinking partners at baseline but not at follow-up) compared to control participants (p = .03, d = .61). Second, when considering those who influenced AOD use with respondents at follow-up but not baseline (i.e., classified as starting AOD use influence), intervention participants had only 13% as many of these retained alters in their networks compared to similar control participants (p = .02, d = .59). Third, intervention recipients had an average of 42% fewer retained alters who changed from not being rated as having any of the three risk characteristics at baseline to having at least one at follow-up compared to control participants (p = .05, d = .52). These associations were not significant within the SRHT-only sample (see Table 2a) except for a marginally significant decrease in alters who started influencing AOD use between waves: SRHT intervention participants averaged only 10% as many of these retained alters in their networks compared to similar control participants (p = .07, d = .49).
---
Network structure and turn-over
For overall network structure, several significant effects of medium-to-large magnitude were found. Average cross-wave network density was 0.18 higher for intervention participants compared to control participants in the SRHT-only sample (p = .02, d = .82), although this association was not significant in the full sample (p = .14). For cross-wave number of components, intervention networks had on average 55% as many components as the control arm for the full sample (p = .02, d = .74) and 42% for the SRHT-only sample (p < .01, d = 1.01). Overall network size did not significantly differ between treatment conditions.

[Table 1 (https://doi.org/10.1371/journal.pone.0262210.t001) notes: baseline and follow-up means and SDs weighted from the full intent-to-treat sample (N = 49) to account for non-response at follow-up; weighted intervention effect estimates and 95% confidence intervals from linear regression models predicting the follow-up measure controlling for baseline; Cohen's d effect sizes interpreted as small (.20), medium (.50), and large (.80).]
However, the average number of alters dropped from the network between baseline and follow-up was marginally lower for intervention participants compared to control participants in the SRHT-only sample, with a small effect size (p = .10, d = .39), but this association was nonsignificant in the full sample (p = .14). The average number of alters retained in the network between baseline and follow-up was marginally higher for intervention participants than control participants in the SRHT-only sample, with a medium to large effect size (p = .07, d = .71), but non-significant in the full sample (p = .12). The number of new alters added to the network between baseline and follow-up did not significantly differ across treatment conditions.

[Table 2 (https://doi.org/10.1371/journal.pone.0262210.t002) notes: baseline and follow-up means and SDs weighted from the full intent-to-treat sample (N = 49) to account for non-response at follow-up; estimates and 95% CIs for alter count and number-of-components outcomes converted to IRRs to aid interpretation of non-linear models, while density estimates are from linear models; Cohen's d effect sizes interpreted as small (.20), medium (.50), and large (.80).]
---
Discussion
The goal of this project was to conduct a pilot evaluation of an innovative MI-SNI using exploratory analyses to determine if the intervention was associated with changes in personal network composition and structure. Building on previous results that demonstrated promising changes to participants' AOD use, readiness to change, and abstinence self-efficacy [34], the results presented here also demonstrate significant associations between participation in the intervention and changes in network characteristics. These findings suggest that the MI-SNI may help individuals experiencing homelessness and risky AOD use positively restructure their social networks while transitioning into supportive housing.
In terms of network composition, we found evidence from the SRHT sample that intervention participants had smaller proportions of risky network members from baseline to follow-up, namely drinking partners and network members who had any risk influence, compared to participants in the control condition. However, contrary to our expectations, we did not find any significant intervention effect on changes in the overall proportion of supportive network members. Another important finding is that intervention participants experienced more positive changes in their relationships with retained alters compared to control participants. For example, compared to control participants, those who received the intervention had a greater number of ties to alters with whom they had a drinking relationship at baseline but did not drink with in the two weeks prior to the follow-up assessment. Intervention participants also had fewer ties to alters who were rated as not being influential over their AOD use at baseline but were rated as having AOD risk characteristics at follow-up. Finally, when examining network turnover, we found that SRHT intervention participants had fewer dropped alters and more retained alters between the baseline and follow-up assessments compared to control participants, resulting in significantly denser networks with fewer components among intervention participants. The full sample analysis showed a similar result for change in components. Therefore, these results demonstrated that the MI-SNI recipients had significantly higher retention of members of their existing networks over the 3 months between assessments compared to participants in the control arm.
These findings provide preliminary evidence that intervention recipients were more likely to positively adjust their relationships with network ties they retained over the first three months after transitioning into housing compared to those who received usual case management. The findings suggest that presenting a series of network visualizations that highlighted network centrality, AOD risk, and social support may have helped MI-SNI recipients recognize both the potential for AOD risk in their networks and the network strengths that were worthy of maintaining. Although the intervention was not associated with increased social support, those who received the intervention had greater network stability and did not differ significantly in their network social support compared to those in the control condition, while reducing their AOD network risk both overall and within retained relationships. Taken together, these findings suggest that the intervention may have triggered recipients to adjust their relationships strategically. For example, participants may have increased their awareness of risky network members, but instead of dropping them from their network, participants may have identified ways to avoid risky interactions when with these members. It is possible that combining personal network visualizations with Motivational Interviewing triggered intervention recipients to articulate active steps they could take to minimize exposure to AOD influence from network members they did not want or were not able to completely avoid. It is possible that the MI-SNI triggered network-specific "change talk" that led to behavior changes in their interactions with their networks [41].
---
Limitations
Although this study provides some promising results that this innovative MI-SNI design coupling Motivational Interviewing and personal network visualizations can help restructure networks in positive ways, there are several limitations worth noting. First, while our sample size is appropriate for an exploratory, small pilot study of a novel intervention approach [48], it was too small to control for factors that may have influenced the results. Also, the large number of exploratory tests run in this study is appropriate for Stage 1 behavioral therapy research development, but may have produced significant findings due to chance. Our predominantly male sample drawn from only 2 housing providers limits generalizability to other housing programs in other geographic regions with different demographic characteristics. A limitation to our tests of network change is that we were only able to collect network data immediately after the intervention period and we have no assessment of the longer-term impact of the intervention on the networks of participants. This study also relied on self-reports of network characteristics at baseline and follow-up. Due to the high respondent burden of completing personal network interviews [68,69], we had to limit our standard questions to only a few relationship characteristics. There are likely many other important relationship qualities that may be impacted by the MI-SNI intervention that we did not measure. As in other AOD use interventions, social desirability may have impacted the self-reported network AOD use outcomes, particularly for those who were invited to receive the intervention sessions and discussed their networks with MI facilitators. However, the findings showing network changes are consistent with individual-level AOD use change outcomes [34] and self-reports by egos of their alters' AOD use using a personal network approach has been found to be accurate when compared to alter self-reports [70].
Another important limitation of this study is the mixture of results that were significant for our primary sample of residents of SRHT only, the original program that contributed to the design of the intervention, and results that were significant for models based on the entire sample. These mixed findings are similar to the results of the analysis of individual level changes in AOD-related outcomes for MI-SNI recipients compared to control participants [34]. These mixed findings make it difficult to draw conclusions because there were too few SRO Housing residents (n = 13) to conduct a sub-sample analysis. Different housing programs that follow a harm reduction model operate in different ways [71] and it is possible that differences in how these two programs provide services and case management to residents impacted these mixed findings. Because of these limitations, many of the results of this exploratory analyses are preliminary and will require a larger RCT to fully test the intervention impact.
---
Conclusions
Despite these limitations, these results met our initial objective to conduct a pilot test of a novel personal network-based intervention approach. The findings suggest enough promise to justify a larger RCT that enables more robust tests of hypotheses. These results provide some evidence that the intervention had an impact on intervention recipients that went beyond changes to their own personal AOD risk behavior. We believe that the findings of this pilot test suggest that coupling MI with visualizations of personal network diagrams that highlight AOD risk and support characteristics may help residents who have recently transitioned to housing to take steps to change their immediate social environment to achieve AOD use reduction goals. These findings suggest that the intervention may have prompted actions by participants to reduce the prominence of network members who had the potential to influence their own AOD risk.
In addition to conducting a larger RCT to provide sufficient power to control for potential confounding factors, such as demographics or housing program characteristics, we recommend that future studies of this approach include a complementary, qualitative investigation of the network change process for MI-SNI recipients compared to control participants to better understand how the intervention triggers a pattern of choices regarding which network members to retain, which to drop, and the development of relationship change strategies. This would possibly shed light on the mechanisms of network change that are triggered by coupling MI with visualizations of personal networks and key relationship characteristics related to beneficial network reconfiguration. The development of the MI-SNI and interpretation of these RCT results benefitted from qualitative data collected during beta tests of the MI-SNI interface [35] as well as other studies of formerly homeless people in substance abuse recovery [19]. Continued collection of qualitative data can provide context to better understand how people actively modify their networks to achieve behavior change outcomes.
A better understanding of the context of network change would also help guide the selection and construction of personal network measures to track changes for both control and intervention participants. We have presented one approach to measuring personal network change that met the goals of this small-sample pilot test. A larger sample would enable other analytic approaches for measuring personal network change [37,39,72], including multilevel models that can test for participant-alter relationship outcomes controlling for participant, alter, and personal network characteristics while accounting for non-independence of ego-alter observations [53-55,73,74]. Although most examples of SNA-informed behavior change interventions use a personal network approach, few have been rigorously tested with RCTs and longitudinal network data [30]. Therefore, this is clearly a developing field and in need of more examples to help identify best practices for measuring and testing network change. Another modification of the design used in this pilot test would be to have residents' case managers deliver the MI-SNI rather than external intervention facilitators. The visualizations resulting from the personal network interviews may help case managers understand the starting point of new residents' social environment as they transition out of homelessness and may improve their ability to understand residents' social challenges and recommend appropriate services.
People transitioning away from homelessness and attempting to reduce their AOD use appear to recognize the importance of the social environment in their continued AOD use. The MI-SNI may be a tool that provides them with an easy-to-understand personal overview of their current social environment. The four sessions that MI-SNI recipients were invited to receive may trigger them to take preliminary steps towards changing aspects of their networks while seeing tangible evidence of how these efforts impacted their networks. This progress towards social network change may encourage changes in the participants' own AOD use behavior. Therefore, changing social networks may make achieving change in AOD use more attainable and may lead to better AOD use outcomes over time. These preliminary findings suggest the need for a larger trial with a longer follow-up. Although the MI-SNI was customized for new residents of a harm reduction housing program, the results of this pilot test also suggest that this intervention approach could have impact beyond the housing context. The MI-SNI intervention approach can be adapted for other populations (e.g., adolescents) and other health outcomes where social networks are influential (e.g., smoking).
---
All data files are available in the GitHub repository: https://github.com/qualintitative/EgoWeb-Project-Data/tree/main/PONE-D-19-36073R1 (trial registration: NCT02140359).
---
Supporting information
---
S1 Appendix. CONSORT checklist. (PDF)
S2 Appendix. Study protocol. This document includes exact text describing the RCT procedures approved by the author's IRB prior to the trial beginning. The document includes the original study plan, human subjects protection plan, and data safeguarding plan provided to the IRB in the initial ethics application, as well as the final text uploaded into the human subjects review system, which was discussed and approved in a full committee meeting prior to the trial starting. (PDF)
---
Author Contributions
Conceptualization: David P. Kennedy | 41,571 | 1,676 |
540848b3d651065a53b1bb4fecbd0a906d13d434 | Supporting the Sexual Rights of Women Living With HIV: A Critical Analysis of Sexual Satisfaction and Pleasure Across Five Relationship Types | 2,018 | [
"JournalArticle",
"Review"
] | In the context of HIV, a focus on protecting others has overridden concern about women's own sexual wellbeing. Drawing on feminist theories, we measured sexual satisfaction and pleasure across five relationship types among women living with HIV in Canada. Of the 1,230 women surveyed, 38.1% were completely or very satisfied with their sexual life, while 31.0% and 30.9% were reasonably or not very/not at all satisfied, respectively. Among those reporting recent sexual experiences (n=675), 41.3% always felt pleasure, with the rest reporting usually/sometimes (38.7%) or seldom/not at all (20.0%). Sex did not equate with satisfaction or pleasure, as some women were completely satisfied without sex while others were having sex without reporting pleasure. After adjusting for confounding factors, such as education, violence, depression, sex work, antiretroviral therapy, and provider discussions about transmission risk, women in long-term/happy relationships (characterized by higher levels of love, greater physical and emotional intimacy, more equitable relationship power, and mainly HIV-negative partners) had increased odds of sexual satisfaction and pleasure relative to women in all other relational contexts. Those in relationships without sex also reported higher satisfaction ratings than women in some sexual relationships. Findings put the focus on women's rights, which are critical to overall well-being. | INTRODUCTION
Sexual wellbeing is a human right. The Declaration of Sexual Rights, endorsed by the World Association for Sexual Health (2014), states "the following sexual rights must be recognized, promoted, respected, and defended" regardless of age, race, sexual orientation, health status, social and economic situation, and so forth: the right to sexual autonomy (including choices about one's body, sexual behaviours, and relationships), the right to sexual freedom (including both the freedom to sexual expression and freedom from all forms of violence, stigma, and oppression), and the right to pleasurable, satisfying, and safe sexual experiences, which can be an important source of overall health and wellbeing. These rights, however, often go unacknowledged and unsupported in research, policy, and discourse regarding the sexuality and sexual health of women living with HIV (Carter, Greene, et al., 2017).
For decades, sex in the context of HIV has been synonymous with danger, resulting in a lack of pleasure in discussions and programs about women and HIV (Higgins & Hirsch, 2007;Higgins, Hoffman, & Dworkin, 2010). This narrative, combined with gendered cultural norms, has produced expectations that women living with HIV ought not to have sex, or, if they must, then need to do so safely, with no acknowledgment of the satisfaction, pleasure, or other benefits that women may be deriving from sex (Gurevich, Mathieson, Bower, & Dhayanandhan, 2007;Lawless, Crawford, Kippax, & Spongberg, 1996). Importantly, however, women living with HIV have, for many years, fought back against these negative sexual scripts. From Mariana Iacono's (2016) tips on how to go down on a woman living with HIV, to queer artist-activist Jessica Whitbread's (2011;2016) "Fuck Positive Women" poster and "I Don't Need a Space Suit to Fuck You" retro lesbian sci-fi fantasia, to the policy statement of the International Community of Women Living with HIV/AIDS (2015) opposing laws that criminalize intimacy between consenting adults, women living with HIV have been at the forefront of efforts to end sexual oppression and promote sexual liberation for themselves and their communities. This kind of sex-positive feminist dialogue is largely absent from HIV research, as most studies concerning HIV-positive women's sexual health continue to focus on others' sexual health. The emphasis on HIV prevention is evident in the large literature on: safer sex, which has primarily interrogated (male) condom use practices (Carvalho et al., 2011); safer conception (Matthews et al., 2017) and prevention of vertical transmission (Ambia & Mandala, 2016); and more recently, treatment-driven prevention strategies, for which the latest science shows that people who are adherent to combination antiretroviral therapy (cART) and achieve and maintain an undetectable viral load (VL) have effectively no risk of sexually transmitting the virus to HIV-negative partners (Rodger et al., 2016). While important inequities in treatment access and adherence exist owing to a myriad of social factors (e.g., substance use, violence, poverty) (Carter, Roth, et al., 2017), researchers are beginning to theorize that this biomedical science may have the unintended good consequence of freeing people living with HIV from repressive discourses of sexual risk and opening up new possibilities for sexual pleasure (Persson, 2016).
To draw attention to the need for research, policy, and discourse to support the sexual rights of women living with HIV, as set forth in the Declaration, the purpose of this study was to explore sexual satisfaction and pleasure among women living with HIV in Canada. Consistent with critical feminist theory (Carter, Greene, et al., 2017), we were concerned with how these experiences relate to issues of power, looking specifically at women's intimate relationships and the larger social realities in which women enact their sexual lives. By studying positive aspects of sexuality, and understanding the relational and social conditions under which women are most and least likely to enjoy them, we aim to shift the focus in HIV to women's rights and help change the dominant narrative from risk to pleasure.
---
Definitions and conceptual underpinnings
Sexual satisfaction and pleasure.
Sexual satisfaction is often defined with regard to positive emotions. For example, Sprecher and Cate (2004) conceptualized it as "the degree to which an individual is satisfied or happy with the sexual aspect of his or her relationship" (p. 236). Early theories of sexual satisfaction stem mainly from social exchange models that posit that feeling sexually satisfied (or sexually unsatisfied) arises from a perceived balance between the presence of sexual rewards (e.g., joy, pleasure) and absence of sexual costs (e.g., anxiety, inhibition) as exchanged between partners (Byers, Demmons, & Lawrance, 1998). These descriptions, however, focus on satisfaction within relationships, while others have measured satisfaction in relation to how happy one is with one's sexual life more broadly (Bridges, Lease, & Ellison, 2004). The Sexual Satisfaction Scale -Women's version (SSS-W) was developed to capture both the relational and personal dimensions of this concept (Meston & Trapnell, 2005), and several other new scales assessing sexual satisfaction have been developed and recently reviewed by Mark, Herbenick, Fortenberry, Sanders, and Reece (2014).
Although sexual pleasure plays an important role in satisfaction (Pascoal, Narciso, & Pereira, 2014), it also has distinct meanings. Broadly defined, Abramson and Pinkerton (2002) described sexual pleasure as the "positively valued feelings induced by sexual stimuli" (p. 8).
Other definitions emphasize both physical and emotional sensations arising from intimate touch of the genitals or other erogenous zones, such as breasts and thighs (De la Garza-Mercer, 2007).
Yet sex and sexual gratification can also encompass broader experiences such as kissing, hugging, or fantasizing (Fahs & McClelland, 2016), which women living with HIV themselves report are important aspects of a pleasurable sexual life (Taylor et al., 2016).
---
Subjectivity, Agency, and Entitlement.
Cutting across these literatures is the notion that sexual satisfaction and pleasure are subjective experiences. Indeed, when people are asked to reflect on these concepts, the individual and dyadic factors they describe as contributing to sexual fulfillment and enjoyment are highly diverse and personal in nature. Yet sexuality is also political, and is "moderated by and unfolds within a particular and cultural milieu" (Abramson & Pinkerton, 2002, p. 10). A key feature, then, of critical sexuality research is attention to the ways in which disparate socio-political conditions may shape not only how women experience but also how they evaluate their sexual lives within specific social contexts (Fahs, 2014;Fahs & McClelland, 2016;McClelland, 2011, 2013).
Feminist scholars have taken up this cause in recent studies by theorizing outcomes in relation to sexual agency and entitlement. Agency has been defined as "the ability of individuals to act according to their own wishes and have control of their sexual lives" (including the choice to have or not have sex) (Fahs & McClelland, 2016, p. 396). In empirical research on the subject, higher agency has been associated with greater sexual satisfaction and excitement (Fetterolf & Sanchez, 2015;Kiefer & Sanchez, 2007;Laan & Rellini, 2011;Sanchez, Kiefer, & Ybarra, 2006), while lower agency has been linked to a reduced likelihood of declining unwanted sex (Bay-Cheng & Eliseo-Arras, 2008) and feeling pleasure (Sanchez, Crocker, & Boike, 2005).
Beyond deciding to have sex and pursue pleasure is the issue of feeling entitled to it.
Sara McClelland (2010) recently elaborated on this in her "intimate justice" framework to guide sexual satisfaction research among marginalized populations. After methodically reviewing decades of sexual and life satisfaction research, she argued that external contexts (e.g., pressure to conform to gender roles, stigma against sexuality) can lower what a person feels they deserve sexually and heighten satisfaction ratings (McClelland, 2010).
---
Research on sexual satisfaction and pleasure among women living with HIV
Both qualitative studies (Carlsson-Lalloo, Rusner, Mellgren, & Berg, 2016) and women's own personal testimonies (Becker, 2014;Caballero, 2016;Carta, 2016;Fratti, 2017;Whitbread, 2016) reveal how several social, political, emotional, and relational factors can affect women's experiences of sex. Common concerns reported in the literature include disclosure and its consequences (e.g., rejection, violence), fears of transmitting HIV and challenges discussing safer sex, and external (e.g., HIV non-disclosure laws) and internal (e.g., low self-esteem) HIVrelated stigmatization (Beckerman & Auerbach, 2002;Crawford, Lawless, & Kippax, 1997;Gurevich et al., 2007;Lather & Smithies, 1997;Siegel, Schrimshaw, & Lekas, 2006;van der Straten, Vernon, Knight, Gomez, & Padian, 1998;Welbourn, 2013). For some women, such stressors contribute to feelings of loss of sexuality (Balaile, Laisser, Ransjo-Arvidson, & Hojer, 2007;Gurevich et al., 2007). Studies, thus, suggest many women (though not all) report less satisfaction with their sex lives (Balaile et al., 2007;Hankins, Gendron, Tran, Lamping, & Lapointe, 1997;Siegel et al., 2006) and reduced enjoyment of sex (Closson et al., 2015;Lambert, Keegan, & Petrak, 2005;Siegel et al., 2006) after an HIV diagnosis.
Evidence from large-scale, quantitative studies is relatively limited, however; and, of significance, most findings come from gender-aggregated data. One of the most consistent predictors of sexual satisfaction in the context of HIV has been stigma-related constructs, with lower satisfaction ratings found among those reporting greater sex-negative attitudes, perceived responsibility for reducing the spread of HIV, discrimination in a relationship, and internalized stigma (Bogart et al., 2006;Castro, Le Gall, Andreo, & Spire, 2010;Inoue, Yamazaki, Seki, Wakabayashi, & Kihara, 2004;Peltzer, 2011). Researchers have also explored the role of age, depression, and education and employment (Bouhnik et al., 2008;Castro et al., 2010;Peltzer, 2011), though only socioeconomic factors have been found to consistently promote satisfaction.
Quantitative studies have not explored relationships well. Studies have focused narrowly on women's relationship status (i.e., married vs. single) and report conflicting findings (Castro et al., 2010;Inoue et al., 2004;Peltzer, 2011). In contrast, results from non-HIV literature emphasize a clear connection between sexual satisfaction and pleasure and numerous indicators of relationship quality such as physical intimacy, emotional closeness, commitment, and gender power relations, among other factors (Haavio-Mannila & Kontula, 1997;Henderson, Lehavot, & Simoni, 2009;Sánchez-Fuentes, Santos-Iglesias, & Sierra, 2014). These studies, however, have failed to account for the multidimensional nature of sexual and intimate partnering, and it is the interaction between relationship dimensions that may be critical to experiences of sexual satisfaction and pleasure.
---
Study objective
In a previous paper, we used latent class analysis (LCA) to model patterns of sexual and intimate relationship experiences among women living with HIV in Canada, uncovering five multi-dimensional latent classes (i.e., no relationship; relationships without sex; and three sexual relationships: short-term, long-term/unhappy, and long-term/happy), which differed on seven indicators of sex, intimacy, and relationship power (Carter et al., 2016). The current paper represents a follow-up to this analysis and is guided by the following objective: to describe women's feelings of sexual satisfaction and pleasure and compare such experiences across these five latent classes, critically examining and adjusting for social and health factors associated with relationship types and predictive of sexual outcomes.
---
METHOD
---
Study design
Data for this analysis came from the baseline questionnaire of the Canadian HIV Women's Sexual and Reproductive Health Cohort Study (CHIWOS, www.chiwos.ca). CHIWOS is a community-based research project of self-identified women living with HIV aged 16 years or older from British Columbia, Ontario, and Quebec (Loutfy et al., 2017). The study is committed to the meaningful involvement of women living with HIV as Peer Research Associates (PRAs) and academic researchers, care providers, and community agencies as allied partners throughout all stages of the research, from the design of data collection tools, through participant outreach and recruitment, to knowledge dissemination activities including scientific co-authorship.
Women living with HIV were recruited into CHIWOS between August 2013 and May 2015, using a comprehensive strategy designed to oversample women from communities traditionally marginalized from research (Webster et al., In Press). After a brief screening interview, PRAs administered FluidSurveys TM questionnaires to women in English (n = 1081) or French (n = 343). Interviews were completed either in-person (at community agencies or women's homes) or via telephone or Skype for those living in rural or remote areas, and lasted an average of 2 hours (interquartile range (IQR): 90-150 minutes). Participants provided voluntary informed consent and were given $50 to honour their time and contributions. We received ethical approval from Simon Fraser University, the University of British Columbia/Providence Health Care, Women's College Hospital, McGill University Health Centre, and community organizations where necessary.
---
Study variables
---
Outcome variables.
Sexual health questions were informed by women living with HIV and aimed at minimizing participant burden. Sexual satisfaction was assessed among all women using one item from the personal contentment domain of the SSS-W (Meston & Trapnell, 2005): "Overall, how satisfactory or unsatisfactory is your present sex life?" Responses were on a five-point scale ranging from "completely," "very," or "reasonably" satisfactory to "not very" or "not at all" satisfactory. The final two categories were collapsed due to low numbers. Sexual pleasure was assessed using one item from the Brief Index of Sexual Functioning for Women (BISF-W) (Taylor, Rosen, & Leiblum, 1994), which read: "During the past month, have you felt pleasure from any forms of sexual experience?" Responses included: "always felt pleasure," "usually, about 75% of the time," "sometimes, about 50% of the time," "seldom, less than 25% of the time," "have not felt any pleasure," and "have had no sexual experience during the past month."
Those with no recent sexual experience were excluded from analyses, with the remainder collapsed into three groups (i.e., always vs. usually/sometimes vs. seldom/none).
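As an illustration of the coding just described (this is not the study's own analysis code, and the column names below are hypothetical placeholders), the collapsing of the SSS-W and BISF-W response options into the analytic categories could be implemented as follows:

```python
# Illustrative sketch only; column names are hypothetical, not actual CHIWOS variables.
import pandas as pd

SATISFACTION_MAP = {
    "completely": "completely",
    "very": "very",
    "reasonably": "reasonably",
    "not very": "not very/not at all",   # final two categories collapsed due to low numbers
    "not at all": "not very/not at all",
}

PLEASURE_MAP = {
    "always felt pleasure": "always",
    "usually, about 75% of the time": "usually/sometimes",
    "sometimes, about 50% of the time": "usually/sometimes",
    "seldom, less than 25% of the time": "seldom/none",
    "have not felt any pleasure": "seldom/none",
}

def recode_outcomes(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["satisfaction_4cat"] = df["sssw_satisfaction_item"].map(SATISFACTION_MAP)
    # Women with no sexual experience in the past month are excluded from the pleasure outcome.
    had_experience = df["bisfw_pleasure_item"] != "have had no sexual experience during the past month"
    df.loc[had_experience, "pleasure_3cat"] = (
        df.loc[had_experience, "bisfw_pleasure_item"].map(PLEASURE_MAP)
    )
    return df
```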
---
Explanatory variables.
The main explanatory variable was relationship latent class, derived via LCA. A detailed description of LCA methodology and these relationship types is available elsewhere (Carter et al., 2016). Briefly, LCA is a person-centred approach capable of identifying clusters of individuals that share a common set of characteristics using structural equation modelling of categorical data (Lanza, Bray, & Collins, 2013). In our analysis, we modelled seven indicators: 1) sexual relationship status (a cross of recent consensual oral, anal, or vaginal sexual activity with a regular partner and current relationship status), 2) (dis)contentment with their frequency of sexual intimacy (e.g., kissing, intercourse, etc.), 3) (dis)contentment with the amount of emotional closeness experienced, and, of those with a regular partner (i.e., spouse, common-law partner, long-term relationship, friend with benefits, or partner seen on and off for some time): 4) relationship duration, 5) couple HIV serostatus, 6) sexual exclusivity, and 7) relationship power (i.e., the Relationship Control sub-scale of the Sexual Relationship Power Scale, developed by Pulerwitz, Gortmaker, and DeJong (2000)). Two items (i.e., emotional closeness and sexual intimacy) came from the SSS-W (Meston & Trapnell, 2005), and bivariable analyses revealed a strong association with reporting a completely satisfactory sex life (data not shown). However, LCA groups women according to their response patterns on multiple variables, which together contribute to the underlying meaning of the latent class. Thus, while we acknowledge strong intercorrelations, we questioned whether these two indicators were perfectly aligned with the outcome of overall sexual satisfaction and sought to uncover this in our analysis, exploring how varying levels of physical and emotional intimacy may impact global satisfaction ratings.
As the resulting latent classes are described elsewhere (Carter et al., 2016), we offer a brief description here along with a figure illustrating the latent class structure (Figure 1). The most prevalent class within the entire sample (which we called no relationship [46.5%]) was comprised entirely of women who reported being single, separated, widowed, or divorced and had not engaged in any consensual oral, anal, or vaginal sexual activity with a regular partner in the past 6 months. The second class (relationships without sex [8.6%]) consisted of women who had similarly not had any recent sex but reported their current legal relationship status as married, common-law, or in a relationship but not living together. Forty-three per cent of the women in this class were content with the amount of physical intimacy in their life (or lack thereof), while 27% felt they had enough emotional closeness. The final three latent classes represented distinct types of consensual sexual relationships with a regular partner (short-term [15.4%], long-term/unhappy [6.4%], and long-term/happy [23.2%]). Relative to women in short-term relationships, women in the two longer-term latent classes had much higher probabilities of reporting that they were in a sexually monogamous relationship, were married, common-law, or non-cohabiting, and had been with their partner for ≥ 3 years. These sexual relationships diverged, however, on contentment with physical intimacy (97%-happy vs. 44%-unhappy vs. 46%-short-term) and emotional closeness (86%-happy vs. 24%-unhappy vs. 16%-short-term), high power equity (93%-happy vs. 52%-unhappy vs. 51%-short-term), and the presence of an HIV-negative partner (71%-happy vs. 59%-unhappy vs. 81%-short-term). Further, in bivariable analyses, we found that women in long-term/happy sexual relationships (66.8%) and relationships without sex (50%) were most likely to report "feeling love for and wanted by someone all of the time", compared to women in long-term/unhappy relationships (33.3%), short-term relationships (24.8%), and no relationship (23.5%) (p < .0001).
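For readers less familiar with LCA, the measurement model underlying this approach can be written in its standard, generic form (not reproduced from the cited papers): each woman's response vector across the categorical indicators is treated as arising from a finite mixture over C latent classes,

P(\mathbf{y}_i) = \sum_{c=1}^{C} \pi_c \prod_{j=1}^{J} \prod_{k=1}^{K_j} \rho_{jk \mid c}^{\,\mathbb{1}\{y_{ij}=k\}},

where \pi_c is the prevalence of class c and \rho_{jk|c} is the probability of giving response k to indicator j conditional on membership in class c; in this analysis, C = 5 classes were estimated from J = 7 relationship indicators.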
---
Confounders.
Factors associated with latent class membership in the previous analysis and theorized to be determinants of sexual satisfaction and pleasure were considered as potential confounders (see tables for full derivations and cited literature for scoring instructions). These included: age; annual personal income; education; children living at home; transactional sex; illicit drug use; any physical, verbal, sexual, or controlling violence as an adult or child; use of cART; discussed with a provider how VL impacts HIV transmission risk; post-traumatic stress disorder (PTSD) (score range = 6 -30, ≥ 14 indicating likely PTSD; Cronbach α = .91) (Lang & Stein, 2005); depression (score range = 0 -30, ≥ 10 suggesting probable depression; Cronbach α = .74) (Zhang et al., 2012); sexism/genderism and racism (score range = 8 -48; Cronbach α = .94) (Williams, Yan, Jackson, & Anderson, 1997); and HIV stigma (score range = 0 -100; Cronbach α = .84) (Berger, Ferrans, & Lashley, 2001). Although not independently associated with relationship types (and thus, not meeting confounding criteria), we also examined the following factors in relation to sexual outcomes in bivariable analyses: gender; sexual orientation; ethnicity; time living with HIV; most recent VL; most recent CD4 cell count; and physical and mental-health related quality of life, assessed via the SF-12 (score range = 0 -100, Cronbach α = .82) (Carter, Loutfy, et al., 2017).
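As a side note for readers scoring similar scales, the internal consistency values (Cronbach's α) reported above follow the standard formula and can be computed directly from a respondents-by-items matrix; the sketch below is purely illustrative and uses a generic item matrix rather than the CHIWOS data:

```python
# Illustrative sketch: Cronbach's alpha computed from its definition.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with rows = respondents and columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# e.g., a 6-item PTSD screener completed by n respondents would be passed as an (n x 6) array.
```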
---
Analysis plan
---
Final analytic sample.
Overall, 1,424 women living with HIV were enrolled in CHIWOS, but only 1,334 were included in the previous LCA owing to missing relationship data. Of these 1,334 women, 1,230 responded to the aforementioned question about sexual satisfaction, while 675 reported on pleasure from any forms of sexual experience in the past month. For regression analyses of sexual satisfaction, we excluded another 163 women who responded "don't know" or "prefer not to answer" to confounders, resulting in a final analytic sample of 1,067 for both unadjusted and adjusted analyses (80.2% of the total sample). For pleasure, the final sample size for multivariable comparisons was 567 (41.6% of the total sample).
---
Descriptive, bivariable, and multivariable analyses.
Baseline characteristics were reported on all 1,334 women comprising the LCA, using frequencies (n) and percentages (%) for categorical variables, and medians (M) and interquartile ranges (Q1, Q3) for continuous variables. Bivariable analyses were conducted of the explanatory variable (relationship types) and confounders by both sexual satisfaction (n = 1230) and pleasure (n = 675). Crude associations were tested using the Pearson χ2 test or Fisher's exact test for categorical variables and the Kruskal-Wallis test for continuous variables. Those with a p-value of <0.2 (Kaida et al., 2015) and previously associated with relationship types (Carter et al., 2016) were examined in further analyses. Binomial and multinomial logistic regression (the latter adjusting for factors meeting confounding criteria) were used to investigate how relationship types were associated with increased odds of feeling completely, very, or reasonably satisfied with one's sexual life, using not very/not at all satisfied as the referent, with unadjusted and adjusted odds ratios (ORs and AORs) and 95% confidence intervals (CIs) reported. Procedures were repeated to explore the link between relationship types and an increased odds of always or usually/sometimes feeling sexual pleasure, using seldom/not at all as the referent. To compare all latent classes, we ran multiple models, each time using a different latent class as the reference group. Analyses were conducted using SAS® version 9.3 (SAS, North Carolina, United States).
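Although the models were fit in SAS, the multinomial specification can be illustrated with the following hedged sketch (Python/statsmodels rather than the authors' code, with hypothetical variable names): the four-level satisfaction outcome is regressed on relationship latent class (referent: no relationship) plus the confounders listed above, and coefficients are exponentiated to obtain AORs and 95% CIs.

```python
# Illustrative sketch only; not the authors' SAS code, and variable names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_satisfaction_model(df: pd.DataFrame):
    df = df.copy()
    # Code the 4-level outcome so that 0 = "not very/not at all" is the base category.
    levels = ["not very/not at all", "reasonably", "very", "completely"]
    df["satisfaction_code"] = pd.Categorical(
        df["satisfaction_4cat"], categories=levels, ordered=True
    ).codes

    model = smf.mnlogit(
        "satisfaction_code ~ C(latent_class, Treatment(reference='no relationship'))"
        " + age + education + income + violence_current + depression + ptsd"
        " + on_cart + provider_vl_discussion + hiv_stigma",
        data=df,
    ).fit(disp=False)

    aor = np.exp(model.params)       # adjusted odds ratios relative to the base outcome
    ci = np.exp(model.conf_int())    # 95% confidence intervals on the odds-ratio scale
    return aor, ci

# Re-running the model with a different reference level of latent_class reproduces the
# pairwise class comparisons described in the Results.
```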
---
RESULTS
---
Social and health circumstances of women's lives
The 1,334 women living with HIV included in baseline analyses were diverse in gender (4.3% trans), sexual orientation (12.5% LGBTQ), ethnicity (22.3% Indigenous; 28.9% African/Caribbean/Black; 41.2% White), socio-economic status (71.4% personal income <$20,000 CAD, 18.1% current illicit drug use, 6.2% current sex work), age (median: 42.0 years; IQR: 35.0, 50.0; range: 16 -74), and time living with HIV (median: 10.8 years; IQR: 5.9, 16.8; range: 1 month -33.7 years). Nearly one-quarter (22.8%) had biological children living at home with them. Nearly half had depression and PTSD symptoms, and 80.4% reported lifetime experiences of violence. Most were taking cART (82.7%) and had an undetectable VL (81.5%). About two-thirds (68.8%) had talked to their doctor about its impact on transmission. Table 1 shows other social and health factors as well as levels of sexual satisfaction and pleasure.
---
Experiences of sexual satisfaction and pleasure
Of women with sexual satisfaction ratings (n = 1,230), 21.0% and 17.1% reported being completely and very satisfied with their sex lives, respectively, with the remainder feeling reasonably (30.9%) or not very/not at all satisfied (30.9%). Overall, 51.8% of the cohort stated they had some form of sexual experience in the past month (n = 675), including 22.5% of women in no relationship and 21.7% of women in relationships without sex. Of these 675 women, 41.3% always and 38.6% usually/sometimes felt pleasure from sexual experience, while 20% reported seldom/no pleasure. Satisfaction and pleasure were correlated but not identical constructs: among those who always felt pleasure, 47.6% were completely and 28.9% were very satisfied with their sex life (vs. reasonably [14.4%] and not very/not at all satisfied [9.0%]; data not shown).
---
Patterns of sexual satisfaction and pleasure by relationship types
As highlighted in Table 2, approximately half (48.7%) of the women in long-term/happy sexual relationships (defined by the highest levels of love, physical and emotional intimacy, shared power, and mixed HIV status) were completely satisfied with their sexual life, while 32.0% were very and 17.3% reasonably sexually satisfied; just 2% (n = 6) said not at all/not very satisfied. The opposite pattern was found for women in no relationship, of whom 44.4% (n = 237) were not very/not at all satisfied; although the remainder were satisfied at some level with their sexual life (i.e., 30.9% reasonably, 12.4% very, and 12.4% completely). Of the three remaining latent classes (all with similar levels of physical intimacy), women in relationships without sex were more likely to report that overall, their present sex life was completely satisfactory (20.4%) than women in short-term (7.6%) and long-term/unhappy (8.2%) sexual relationships.
In terms of sexual pleasure (Table 3), 64.2% of women in long-term/happy sexual relationships reported that they always felt pleasure from any forms of sexual experience during the past month, while 33.9% usually/sometimes felt pleasure and 2.8% experienced seldom/no pleasure. Reports of always feeling pleasure were much lower among women in short-term sexual relationships (30.7%), and even lower among those in long-term/unhappy sexual relationships (16.2%, characterized by longer duration and more HIV-positive partners). For women in no relationship or relationships without sex, about one-quarter reported always feeling pleasure during their sexual experiences.
As seen in both tables, sex did not equate with satisfaction or pleasure, as some women were completely satisfied without sex (i.e., 12.4% no relationship, 20.4% relationships without sex), while others were having sex without reporting pleasure (i.e., 24.2% short-term, 21.6% long-term/unhappy).
---
Patterns of sexual satisfaction and pleasure by social and health factors
In terms of social and health covariates, sexual satisfaction was crudely associated with age, sexism/genderism, annual personal income, education, PTSD and depressive symptoms, violence as an adult and as a child, cART, discussed with provider how VL impacts transmission risk, and HIV stigma, all of which were associated with relationship types in our previous LCA paper (Carter et al., 2016). With the exception of income, these same factors showed crude associations with sexual pleasure, along with three additional influences (i.e., transactional sex, illicit drug use, and children at home). Gender and sexual orientation were not associated with relationship types or sexual satisfaction and pleasure, while ethnicity was only associated with sexual satisfaction: specifically, Indigenous women were more likely to be completely sexually satisfied (27.8%) compared to women of all other ethnicities (18.1 -20.5%), while African, Caribbean, and Black women reported the highest rates of sexual dissatisfaction (38.5%) versus their peers (range: 19.9 -33.7%). Since, however, ethnicity was not a determinant of relationship types (the second criterion for confounding), it was excluded from the multivariable confounder analyses. Clinical factors (e.g., VL, CD4 count) were not examined further for the same reason.
---
Multivariable confounder analysis of sexual satisfaction
In adjusted analyses, women in long-term/happy sexual relationships had much greater odds of reporting satisfaction with their sexual life than women in all other latent classes, with the greatest effects seen relative to no relationship, and the weakest in relation to relationships without sex (Table 4, n = 1,067). Additionally, the effect estimates were generally strongest at the highest level of sexual satisfaction ("completely") and gradually decreased in strength through to the middle ("very") and lowest level of satisfaction ("reasonably"), all relative to "not very/not at all" satisfied. For instance, after adjusting for confounders, the odds of feeling completely satisfied with one's sex life (vs. not very/not at all) were 94 times greater among women in long-term/happy relationships than women in no relationship (AOR = 94.05, 95% CI = 35.75, 247.44). The extremely large estimates and wide CIs indicate a strong predictor and reflect the fact that very few women in long-term/happy relationships were not very/not at all satisfied (n = 6 [2.0%]) versus many women in no relationship (n = 237 [44.4%]). Much lower effect estimates (i.e., less than 2) were observed for all other relationship comparisons. For instance, women in relationships without sex also had increased adjusted odds of reporting that their sex life was completely satisfactory, relative to women in no relationship (although the 95% CI included the null value) (AOR = 1.88, 95% CI = 0.98, 3.63). There were no differences when comparing short-term and long-term/unhappy relationships to no relationship (referent) at the highest outcome level (i.e., completely satisfied), but higher AORs were seen at the remaining two outcome levels (i.e., very and reasonably satisfied). Likewise, there were also no differences when women in relationships without sex were used as the referent.
In terms of confounding factors, women with depression (AOR = 0.32, 95% CI = 0.20, 0.53) and women currently experiencing violence (AOR = 0.38, 95% CI = 0.18, 0.82) had reduced odds of reporting a completely satisfactory sex life. Older age (AOR = 0.89, 95% CI = 0.73, 1.09, per 10-year increase in age) and HIV stigma (AOR = 0.98, 95% CI = 0.87, 1.09) were also associated with reduced odds of sexual satisfaction, though the estimates were smaller and the patterns non-significant (i.e., the 95% CI included the null value). Women with higher than high school education also had lower AORs for being completely satisfied relative to women with lower than high school education (AOR = 0.46, 95% CI = 0.24, 0.86), as did women who had discussed with their provider how VL impacts transmission risk (AOR = 0.67, 95% CI = 0.43, 1.05).
---
Multivariable confounder analysis of sexual pleasure
In regard to sexual pleasure (Table 5, n = 567), women in long-term/happy sexual relationships had greater adjusted odds of reporting that they always felt pleasure during any sexual experiences versus seldom/no pleasure, relative to those in long-term/unhappy relationships (AOR = 41.02, 95% CI = 11.49, 146.40) and those in short-term relationships (AOR = 11.83, 95% CI = 4.29, 32.59). The strength of association was reduced at the outcome level of "usually/sometimes" felt pleasure but nonetheless elevated (i.e., referents: long-term/unhappy: AOR = 4.84, 95% CI = 1.66, 14.09; short-term: AOR = 6.48, 95% CI = 2.40, 17.47). In comparing women in long-term/unhappy relationships versus short-term relationships, the adjusted odds of always feeling pleasure during sexual experiences were reduced for the former group by 71% (AOR = 0.29, 95% CI = 0.10, 0.87). No significant differences in the experiences of pleasure were observed when comparing those in no relationship to those in relationships without sex.
In terms of confounders, as with sexual satisfaction, women experiencing depression (AOR = 0.46, 95% CI = 0.24, 0.91) and current violence (AOR = 0.21, 95% CI = 0.06, 0.73) had lower adjusted odds of reporting that they always felt pleasure. Current transactional sex, while not included in the satisfaction model, was also associated with a significant reduction in always feeling pleasure (AOR= 0.16, 95% CI = 0.05, 0.52). Similar to the previous model, small and non-significant associations with pleasure were seen for older age (AOR = 0.82, 95% CI: 0.59, 1.13) and HIV stigma (AOR= 0.88, 95% CI = 0.74, 1.03). On the other hand, two contrasting findings were seen in relation to higher than high school education (AOR = 2.22, 95% CI = 0.94, 5.22) and having discussed with a provider how VL impacts transmission risk (AOR= 1.87, 95% CI = 1.00, 3.50), with higher (i.e., above 1) AORs for always reporting pleasure observed rather than lower (i.e., below 1) AORs as seen with satisfaction.
---
DISCUSSION
This analysis revealed positive dimensions of sexual health for women living with HIV in Canada: 69% of women in our cohort were satisfied, to some extent (i.e., reasonably, very or completely), with their sexual life (or lack thereof), and among those with recent sexual experiences, 41.3% reported always feeling sexual pleasure. This finding disrupts narratives of sexual danger in the context of HIV and demonstrates to women living with HIV, and to society, that many women can and do enjoy their sexual lives following a diagnosis of HIV. Yet access to a satisfying and pleasurable sex life was not equal amongst women in our cohort. A key finding was that women in long-term/happy relationships (characterized by higher levels of love, greater physical and emotional intimacy, more equitable relationship power, and mainly HIV-negative partners) had the highest degree of sexual satisfaction and pleasure. It is noteworthy, however, that some women in this cohort were sexually satisfied despite being in no relationship or a nonsexual relationship. Our analysis also highlighted how social status and mental health are related to sexual satisfaction and pleasure. These findings fill important knowledge gaps pertaining to how relational dynamics, social inequities, and trauma impact positive and rewarding aspects of sexuality for women living with HIV, an under-studied population in the field of sexual science.
The overall prevalence of sexual satisfaction in our analysis is similar to that reported for other HIV cohorts (Castro et al., 2010;Lambert et al., 2005), but lower than some general population estimates (i.e., 75-83%) (Colson, Lemaire, Pinton, Hamidi, & Klein, 2006;Dunn, Croft, & Hackett, 2000). The differences may be due to the effects of living with HIV or other social factors that disproportionately impact women living with the virus, such as violence and chronic depression (Machtinger, Wilson, Haberer, & Weiss, 2012). However, it remains difficult to draw conclusive interpretations and to compare to other, more recent studies (Heiman et al., 2011;Henderson et al., 2009;Schmiedeberg & Schröder, 2016;Velten & Margraf, 2017), as researchers have used various single- and multi-item instruments (with slight differences in question wording and response scales) and have commonly focused exclusively on sexually active individuals in relationships (del Mar Sánchez-Fuentes, Santos-Iglesias, & Sierra, 2014).
Conversely, our prevalence of sexual pleasure is higher than that reported by one previous HIV study (Hankins et al., 1997), conducted early in the epidemic. Thirty-three per cent of women living with HIV in that study reported feeling little to no sexual pleasure during recent sexual activity, compared to just 20% of women in our analysis. As both scales used the same time frame, phrasing, and study population, this improvement over time could reflect the repositioning of HIV as a chronic disease today, which may reduce fears of transmission and maximize women's enjoyment of sex.
The finding that women in long-term/happy relationships were more likely to feel that their present sex life was, overall, either completely, very, or reasonably satisfactory compared to women in all other relational contexts is consistent with other results showing that the quality of a relationship with a partner can impact the quality of women's sex life, both within (Castro et al., 2010;Inoue et al., 2004;Peltzer, 2011) and outside the HIV field (Haavio-Mannila & Kontula, 1997;Henderson et al., 2009;Sánchez-Fuentes et al., 2014). Previous studies, though, focused on singular dimensions. For example, some reported that longer relationship duration predicts lower sexual satisfaction due, in part, to more familiar, routine sex (Carpenter, Nathanson, & Kim, 2009;Liu, 2003;Pedersen & Blekesaune, 2003;Schmiedeberg & Schröder, 2016). Yet, within long-term committed relationships, women can have varying experiences of sexual satisfaction based on other critical subtleties of relationships, as seen with the long-term/happy and long-term/unhappy latent classes in our analysis (of which the latter had lower levels of love, power, intimacy, and HIV-negative partners and were less likely to be satisfied sexually). This finding underscores the importance of considering the interaction of several relationship variables. It also highlights how partaking in sex does not universally mean a woman is enjoying a satisfying sex life, adding to previous literature among women without HIV (Fahs & Swank, 2011).
With regard to sexual pleasure, we found that women in long-term/unhappy relationships also had significantly reduced odds of always feeling pleasure compared to women in short-term and long-term/happy relationships. The former comparison (i.e., long-term/unhappy to short-term) may indicate that, when indicators of intimacy and power are equal, newer relationships are more sexually gratifying, as observed in past HIV research (Hankins et al., 1997). It may also point to a role of couple HIV serostatus, as HIV-positive partners were more common in long-term/unhappy relationships, and previous research suggests some women may stay in these relationships simply because of shared status, fearing that no HIV-negative person would want to be with them (Keegan, Lambert, & Petrak, 2005;Lawless et al., 1996;Nevedal & Sankar, 2015).
Yet relationships and pleasurable sex are possible with HIV-negative people, as seen for women in the long-term/happy latent class (of which 71% had HIV-negative partners and 64.2% always felt pleasure), corroborating past research linking pleasure to power equity (Holland, Ramazanoglu, Sharpe, & Thomson, 1992), physical and emotional intimacy (Muhanguzi, 2015), and other relational factors (Carpenter et al., 2009). This finding subverts a common assumption that couples with differing HIV statuses are plagued by sexual challenges (Beckerman & Auerbach, 2002;Bunnell et al., 2005;Lawless et al., 1996;Rispel, Metcalf, Moody, Cloete, & Caswell, 2011;Siegel et al., 2006;van der Straten et al., 1998). Clearly, HIV "serodiscordance" does not necessarily mean sexual discord. In fact, serodiscordance may even enhance intimacy for women through the process of partner acceptance and validation (Persson, 2005), which may reduce internalized stigma and facilitate self-acceptance, all leading to more capacity for trust, intimacy, and pleasure.
Beyond relationships, our findings highlight how sexual experiences are also shaped by a number of important social factors. Women living with HIV experience high rates of violence (Logie et al., 2017), depression, and trauma (Machtinger et al., 2012). Our results show that these stressors can greatly affect experiences of both sexual satisfaction and pleasure, consistent with findings outside the HIV field (del Mar Sánchez-Fuentes et al., 2014). Involvement in transactional sex is also more common among women living with HIV, though it negatively affected reports of sexual pleasure only. Conversely, factors associated with increased sexual pleasure included higher education and provider communication about the science of transmission, while these same factors predicted lower odds for sexual satisfaction. The former findings are consistent with previous research linking higher social status to sexual pleasure (Sanchez et al., 2005), likely through enhanced sexual agency (Bay-Cheng & Eliseo-Arras, 2008). They may also signify the sexually liberating potential of the prevention benefits of cART for some women (Persson, 2016), though important inequities in awareness of this science and in treatment remain (Carter, Roth, et al., 2017;Patterson et al., 2017). Regarding the latter finding (on satisfaction), one interpretation may be that women who are more highly educated and have talked to their doctor about this global strategy are less satisfied because they have higher internal expectations for their sex lives (McClelland, 2010).
Collectively, these findings expand the literature on the sexuality of women living with HIV, while also making a number of contributions to the broader science of women's sexuality.
First and foremost, critical sexuality researchers have emphasized the importance of centering discussions of abject bodies within the sexuality field (Fahs & McClelland, 2016). This study constitutes an important example of how to engage with this goal. By reframing the sexual experiences of women who are living with HIV away from contagion, as women with other sexually transmitted infections (Nack, 2008) and severe mental illness (Davison & Huntington, 2010) have done, we can build an evidence-base that de-stigmatizes sexuality for marginalized and excluded groups. The findings also make visible the relational and social powers that influence women's sexuality. Many of these factors (e.g., sex work, drug use, violence at war, PTSD) are invisible in current literature, as psychological studies often rely on university samples. Finally, from a methodological point of view, this paper demonstrates the utility of feminist quantitative approaches in understanding and supporting women's sexual lives. LCA, in particular, offers a rich area of study for measuring dynamic patterns of sex and relationship experience.
---
Limitations
A significant limitation of this study is that the measures used to assess sexual satisfaction and pleasure were broad, whereas the underlying concepts are comprehensive and multifaceted (Opperman, Braun, Clarke, & Rogers, 2014;Pronier & Monk-Turner, 2014).
Choice of measurement should be informed by the research question; however, this study was a tertiary objective of the larger parent study. Our questionnaire had a total of nine sections (Abelsohn, 2014), just one of which was specific to sexual health. Of relevance to feminist community-based research, we prioritized questions that were most important to women with HIV and sought to balance participant burden with scientific rigor, a frequent challenge in research with vulnerable populations (Ulrich, Wallen, Feister, & Grady, 2005). While our single-item assessments precluded us from understanding the multiple dimensions of these constructs, it is worth noting that a recent review of sexual satisfaction tools found that just one question can meet some psychometric criteria and is enough if cost or participant burden is a concern (although this item was not from the SSS-W) (Mark et al., 2014). Nonetheless, future research should examine these experiences using the full range of items included in validated scales.
We also acknowledge that we did not assess how women were interpreting "sexual satisfaction" and "sexual pleasure." While these experiences may be quite personal in nature (i.e., what brings one woman sexual enjoyment may not pleasure another woman), appraisals may be subject to gender norms, social stigma, and other factors (McClelland, 2010, 2011, 2013). For instance, some women may consider their partner's satisfaction in their own self-ratings (McClelland, 2011), or pleasure may be experienced or interpreted differently across age groups (Taylor et al., 2016). Data may also be affected by social desirability bias, such that sexual satisfaction and/or pleasure were over-reported. We aimed to minimize the effect of such biases through the involvement of women living with HIV in the design and administration of the survey, as well as intensive survey training and piloting procedures.
Another important limitation is that we provided no definition for "sexual experiences," which, depending on the person, may include oral, vaginal, and/or anal sex as well as a broader range of activities such as kissing, touching, masturbation, and so forth (Peterson & Muehlenhard, 2007;Sanders et al., 2010). Given the varied meanings of the same construct, it remains difficult to make conclusions about the kinds of activities that are eliciting pleasure as well as reports of pleasure among women in no relationships and relationships without sex.
Future HIV studies should assess these constructs in surveys more carefully. Future work should also explore how physical health (e.g., vaginal pain, disabilities, general ill-health) may influence sexual enjoyment, as these data were not collected in our survey. Some effect estimates for sexual satisfaction were extremely large with wide CIs, chiefly for the long-term/happy versus no relationship comparison, because of high correlations with two LCA indicators (i.e., physical intimacy and emotional closeness). These results should be interpreted cautiously. Interestingly though, these measures were not perfectly correlated in our study, since three classes (i.e., relationships without sex, short-term, and long-term/unhappy) had similar levels of physical intimacy but differed in terms of overall satisfaction, perhaps owing to differing emotional closeness, couple HIV serostatus, or other unmeasured factors (e.g., trust, communication). Future work should assess additional aspects of relationships (including non-sexual dynamics) and explore their relative importance. This topic is particularly ripe for exploration qualitatively, and studies should explore women's narratives about feeling sexually happy and having great sex to help increase possibilities for women living with HIV to enjoy their sexuality.
While this research has limitations, it focuses on a much-needed area of sexual health for women living with HIV. Additional critical studies on sexual rights and social justice in the context of HIV are necessary.
---
Implications
Sexual satisfaction and pleasure were greatest in long-term/happy relationships, underscoring the centrality of love, intimacy, and power to positive sexual outcomes. However, it is important to acknowledge that all consensual relationship types are valid, and to avoid discourses that position women's pursuit of pleasure as proper only in the context of committed, long-term relationships (Fahs, 2014;Holland, Ramazanoglu, Scott, Sharpe, & Thomson, 1990).
Women deserve to have the type of relationship they want (inclusive of no sex and both serious and casual relations), and they should be free to pursue pleasurable and satisfying sexual experiences regardless. Thus, we advocate for interventions that 1) address unequal sexual power within all relationships and between different socio-demographic groups, 2) promote sexuality and HIV education (including the right to autonomy, mutual pleasure, and the science of HIV transmission), and 3) address the social impediments to women's sexual wellbeing, especially stigma, violence, and trauma of various kinds. By doing so, all women living with HIV may be able to more easily negotiate and fight for sexual satisfaction and pleasure in their lives.
---
Conclusions
This research provides an alternative, pleasure-focused narrative that is largely absent in quantitative research on sexuality among women living with HIV, one that supports women's right to sexual satisfaction and pleasure while simultaneously uncovering the factors that can deny women these rights. In making perspectives like these more visible, and through disseminating positive accounts of sexuality, we hope women living with HIV will feel less alone and more empowered to lead the sexual lives they really want. We call on providers and researchers to support women in this endeavour by talking about and studying the rewarding aspects of sexuality and relationships, including non-sexual relationships that can bring joy to women's lives. Not only is researching and promoting sexual satisfaction and pleasure important for pleasure's sake, but it may also contribute to positive outcomes across multiple dimensions of well-being and sexual health (Herbenick et al., 2009;Higgins, Mullinax, Trussell, Davidson Sr, & Moore, 2011;Hogarth & Ingham, 2009;Smiler, Ward, Caruthers, & Merriwether, 2005).
---
Mental health and violence factors
Mental health-related quality of life 46.5 (34.3, 55.6) 40.6 (29.8, 51.8) 33.3 (23.9, 44.4 | 49,521 | 1,420 |
e8fe5017613a38903513363950590c162f756e83 | Protective Factors for Depression Among African American Children of Predominantly Low-Income Mothers with Depression | 2,013 | [
"JournalArticle"
] | Maternal depression has a deleterious impact on child psychological outcomes, including depression symptoms. However, there is limited research on the protective factors for these children and even less for African Americans. The purpose of the study is to examine the effects of positive parenting skills on child depression and the potential protective effects of social skills and kinship support among African American children whose mothers are depressed and lowincome. African American mothers (n = 77) with a past year diagnosis of a depressive disorder and one of their children (ages 8-14) completed self-report measures of positive parenting skills, social skills, kinship support, and depression in a cross-sectional design. Regression analyses demonstrated that there was a significant interaction effect of positive parenting skills and child social skills on child depression symptoms. Specifically, parent report of child social skills was negatively associated with child depression symptoms for children exposed to poorer parenting skills; however, this association was not significant for children exposed to more positive and involved parenting. Kinship support did not show a moderating effect, although greater maternal depression severity was correlated with more child-reported kinship support. The study findings have implications for developing interventions for families with maternal depression. In particular, parenting and child social skills are potential areas for intervention to prevent depression among African American youth. | Introduction
Major Depressive Disorder (MDD) is experienced by approximately 16% of Americans in the course of their lives (Kessler, Chiu, Demler, & Walters, 2005) and is expected to be the leading cause of disability among all diseases by the year 2030 (World Health Organization, 2008). Although recent research has reported that rates of MDD are lower for African Americans than for the general population (Breslau et al., 2006;Williams et al., 2007), depression is significant for African Americans for several reasons. When African Americans experience MDD, the disorder is often more severe and poses a greater burden than observed with other ethnic groups (Williams et al., 2007). African Americans with depression are also less likely to utilize treatment services (Garland et al., 2005;Neighbors et al., 2007). Specifically, a recent study found that 40% of African Americans with MDD received treatment compared to 54% of non-Latino Whites with MDD, suggesting a significant health disparity (González et al., 2010).
Maternal depression is also a significant issue for African American families, as demonstrated by a recent study, which found that the lifetime prevalence of MDD for African American mothers was 14.5% (Boyd, Joe, Michalopoulos, Davis, & Jackson, 2011). Despite the substantial amount of research on maternal depression, African American families with maternal depression are understudied. This is a critical area of research because African American women and their children are disproportionately confronted with environmental and life stressors that may increase their vulnerability to depression (Goodman et al., 2011;Riley et al., 2009).
The children of mothers with depression are at risk for a range of negative developmental and psychological outcomes. For example, they are more likely to be depressed or anxious themselves, and more likely to have problems with disruptive and oppositional behavior (Goodman et al., 2011;Luoma et al., 2001). Longitudinal research has shown that the negative effects of maternal depression begin in childhood and continue into adolescence and adulthood (Campbell, Morgan-Lopez, Cox, McLoyd, & National Institute of Child, Health and Human Development Early Child Care Research Network, 2009;Lewinsohn, Olino, & Klein, 2005;Weissman et al., 2006). A 2011 meta-analysis of 193 studies found significant small-magnitude effects of mothers' depression on children's outcomes, including both internalizing and externalizing behavior (Goodman et al., 2011). Although only a small number of the studies assessed ethnic minorities, the relationship between maternal depression and negative child outcomes was shown to be even stronger among these populations.
---
Mechanisms for transmission of depression
In order to prevent or reduce depression in African American children, it is important to consider the processes by which depression develops. Hammack's (2003) integrated theoretical model for the development of depression in African American youth outlines a potential pathway starting with social and environmental stress, leading to parent psychopathology and subsequent impaired parenting, which then results in youth depression. Several studies have found evidence that family environment and parent-child interactions impact the transmission of depression (Carter, Garrity-Rokouys, Chazen-Cohen, Little & Briggs-Gowan, 2001;Jones, Forehand, & Neary, 2001). Specifically, mothers' depressive thoughts and behaviors may prevent them from engaging in more positive parenting behaviors that would better meet children's emotional and developmental needs (Goodman, 2007;Goodman & Gotlib, 1999). Depression has also been shown to interfere with effective parenting by making mothers less responsive to their children or less supportive, and by increasing the use of negative or harsh parenting behaviors (Mitchell et al., 2010). In a meta-analysis of 46 observational studies of the relationship between depression and parenting, Lovejoy, Graczyk, O'Hare, and Neuman (2000) found that mothers with depression showed significantly higher levels of negative parenting behaviors, were significantly more disengaged, and demonstrated significantly less positive parenting behavior.
On the other hand, there is recent research suggesting that the use of positive parenting practices (e.g., praise, encouragement of appropriate behavior) buffers children from the impact of their mother's depression. This is an understudied area; however, it has been found that maternal symptoms of depression impact a young child less when the mother is more responsive and affectionate (Leckman-Westin, Cohen, & Stueve, 2009). Other studies with African American and Caucasian adolescents have found that positive or supportive parenting is associated with lower rates of depression and anxiety currently and six and twelve months later (Compas et al., 2010;Jones, Forehand, Brody, & Armistead, 2002;Zimmerman, Ramirez-Valles, Zapert, & Maton, 2000). Although there is some good evidence of the beneficial impact of positive parenting practices, further examination of their potential protective role in families with maternal depression is needed.
---
Protective Factors
Children's social skills are another source of resilience for children at risk for negative outcomes (Luthar, Cicchetti, & Becker, 2000). There is evidence that children's social competence is linked to positive psychosocial and educational outcomes (Ladd, 1990;McClelland, Morrison, & Holmes, 2000;Welsh, Parke, Widaman, & O'Neil, 2001). At the same time, studies of pre-adolescent and adolescent depression have determined that poorer social skills and deficits in social problem-solving are significantly related to youth depression symptoms (Becker-Weidman, Jacobs, Reinecke, Silva, March, 2010;Frye & Goodman, 2000;Ross, Shochet, & Bellair, 2010). Unfortunately, social skill development may be impeded for children whose mothers have depression. These children are less likely to be exposed to enriching social situations with peers and positive adults, and more likely to observe and learn their mother's negative cognitive style as it relates to social interactions (Hipwell, Murray, Ducournau, 2005;Silk, Shaw, Skuban, Oland, & Kovacs, 2006;Taylor & Ingram, 1999;Wu, Selig, Roberts, & Steele, 2010). On the other hand, coping efficacy, emotion regulation skills, and social skills have been shown to foster resiliency among children exposed to maternal depression (Beardslee & Podorefsky, 1988;Riley et al., 2008;Silk et al., 2007). As such, we hypothesize that children's social skills buffer them against the lower levels of positive parenting behavior often associated with maternal depression.
Kinship support is another factor proven to buffer children from negative psychosocial outcomes. Research with urban African Americans has shown that kinship support moderates the effect of negative family interactions on children's and adolescents' internalizing and externalizing behavior (Li, Nussbaum, & Richards, 2007;Taylor, 2010). Higher levels of kinship support have been found to be associated with greater maternal warmth, emotional support, and better maintenance of routines within the family (Taylor, 2011). In this same study, Taylor found that the beneficial impact of kinship support on mother's supportive parenting behavior was less for mothers with more depression symptoms. In a different sample of mothers with depression, mothers' lower satisfaction with their social support networks was associated with more internalizing disorders in their children one year later (McCarty, McMahon, Conduct Problems Research Group, 2007). While there is good preliminary evidence for the protective function of kinship support in families with maternal depression, its role along with other protective factors in preventing children's depression merits closer examination.
In the present study, we examine the effects of positive parenting behaviors on child depression and the potential protective effects of social skills and kinship support among low-income African American children whose mothers are depressed. Specifically, we will test whether kinship support and child social skills moderate the impact of positive parenting skills on children's symptoms of depression. We hypothesize that more positive and involved parenting practices will be associated with less child depression. We also hypothesize that both kinship support and child social skills will serve as protective factors and moderate the impact of positive parenting skills on child depression.
---
Method
---
Participants
The participants were 77 mother-child dyads. The children ranged in age from 8 to 14 years with a mean age of 11.1 (SD = 2.0) years. Their school grade ranged from second to tenth with a mean of grade 5.6 (SD = 2.1). Approximately half (58%; n = 45) of the children were female. All mothers identified their children as African American, however, 7.8% (n = 6) also identified with other races (i.e., White, Native Hawaiian/Pacific Islander, Asian, American Indian/Alaskan Native). Five children (6.5%) were also of Latino ethnicity.
The mothers ranged in age from 23 to 63 years with a mean age of 38.6 (SD = 7.4) years. All mothers identified their race as African American, with 6.5% (n = 5) also identifying with other races (i.e., White, Native Hawaiian/Pacific Islander, Asian, American Indian/Alaskan Native) and 1.3% (n = 1) also identifying with Latino ethnicity. The majority of the mothers were never married (63.6%, n = 49), while 15.6% (n = 12) were married or living with a partner and 20.8% (n = 16) were separated, divorced or widowed. The majority of the mothers received public assistance (59.2%, n = 45). Total household income for the sample was as follows: 33.8% (n = 26) between $0-$10,000; 28.6% (n = 22) between $10,001-$20,000; 9.1% (n = 7) between $20,001-$30,000; 11.7% (n = 9) between $30,001-$40,000; 6.5% (n = 5) between $40,001-$50,000; and 6.5% (n = 5) $50,000 or greater. Data were not available for three households. In terms of education level, approximately 72% (n = 55) of the mothers had high school degree equivalency or higher. Specifically, 22% (n = 17) were high school graduates or obtained a GED, 32% (n = 24) attended some college or vocational school, 9% (n = 7) graduated from vocational school, and 9% (n = 7) graduated from college or higher. In the majority (90.9%; n = 70) of the mother-child dyads, the mother was the child's biological parent.
---
Procedures
Participants were drawn from two related studies focusing on maternal depression within African American families. Mothers were eligible for the study if they: 1) were African American; 2) had a primary current or past-year psychiatric diagnosis of MDD, Dysthymic Disorder or Depressive Disorder, Not Otherwise Specified; and 3) were the primary caregiver of a school-age child who resided with them on at least a part-time basis. Mothers could not have: 1) a history of Bipolar Disorder or any psychotic disorder; 2) current or past year diagnosis of substance dependence; or 3) mental retardation (determined by mothers stating that they had been diagnosed with mental retardation within their lifetime). Children reported by their mothers as having a diagnosis of mental retardation were also excluded from the study.
Study participation involved three steps. First, mothers completed a telephone screening to assess their preliminary eligibility for the study. If appropriate, the diagnostic eligibility of the mothers was then determined by a clinical interview (Structured Clinical Interview for DSM-IV-TR Axis I Disorders, First, Spitzer, Gibbon, & Williams, 2001) conducted by the primary author (a licensed clinical psychologist). Finally, eligible mothers and one of their children completed a battery of questionnaires read aloud by research staff. Mother and child were each paid $20 for the assessment interview. The consent process was conducted in person by the principal investigator or another member of the research staff such that the study team obtained written consent for participation from the mother and verbal assent from the child. The studies were approved by the Institutional Review Boards of the Children's Hospital of Philadelphia, the University of Pennsylvania, and the Philadelphia Department of Public Health.
---
Recruitment
The principal investigator and research staff developed relationships with staff at clinic and community sites throughout a large metropolitan area in order to recruit study participants. These recruitment sites included outpatient mental health agencies, other research studies, homeless shelters, schools, and health fairs. Recruitment flyers were also given to community site staff for dissemination and put on public display at participating sites. Additionally, recruitment advertisements were placed in several local newspapers. Interested participants contacted research staff via telephone or completion of consent to contact forms at recruitment sites. The largest recruitment sources were newspaper advertisements and research studies. To facilitate recruitment, childcare was provided and participants received bus tokens or reimbursement for parking costs.
---
Measures
To assess positive parenting skills, mothers completed the Parenting Practices Scale (Tolan, Gorman-Smith & Henry, 2000), which has four scales: Positive Parenting, Extent of Involvement in the child's life, Discipline Effectiveness, and Avoidance of Discipline. The Positive Parenting scale assesses the use of rewards and encouragement of appropriate behavior. The Extent of Involvement scale assesses parents' involvement in the child's daily activities and routines. For the current study, the Positive Parenting and Extent of Involvement scales were summed for a Positive Parenting Skills total score. The Discipline scales were not utilized in the current study as they are more relevant for delinquent youth and do not assess positive parenting skills. Confirmatory factor analyses demonstrated a latent construct representing both positive parenting and extent of involvement (Gorman-Smith, Tolan, Henry, & Florsheim, 2000;Gorman-Smith, Tolan, Zelli & Huesmann, 1996) supporting the validity of the Positive Parenting Skills total scale. The scales of the Parenting Practices Scale have previously demonstrated adequate internal consistency (.78-.84) with caregivers of urban youth (Gorman-Smith et al., 1996;Tolan et al., 2000). In the current sample, the Cronbach alpha coefficient was .77 for the overall Positive Parenting Skills total score, .85 for the Positive Parenting scale, and .63 for the Extent of Involvement scale.
The Social Skills Rating System (SSRS; Gresham & Elliott, 1990) was used to assess children's social skills (i.e., cooperation, assertion, responsibility, empathy, and self-control). The SSRS has child-report and parent-report versions for different developmental levels. For purposes of the present study, total standard scores were combined across elementary and secondary levels. The standard scores are based on normative data for gender and grade and provide an equivalent metric across the multiple versions of the SSRS. The child-report version has good internal consistency (α = .83) and adequate four-week test-retest reliability (r = .68). The child-report and parent-report versions for children from kindergarten to 12 th grade demonstrate adequate reliability and validity (Gresham & Elliott, 1990). The Cronbach alpha coefficients for the child-report version with the current sample are .87 for elementary-age children and .91 for secondary school-age children. The Cronbach alpha coefficients for the parent report version are .70 for elementary-age children and .72 for secondary school-age children (Gresham & Elliott, 1990).
The Kinship Support Scale (Taylor, Casten, & Flickinger, 1993) was completed by mothers and children in order to assess each individual's perception of the amount of social and emotional support received from extended family members. Construct validity of this measure is demonstrated by positive correlations with measures of family routines and informal kinship support (Jones, 2007;Taylor, Seaton, & Dominquez, 2008). The Kinship Support Scale has adequate internal consistency (0.72 -0.86) for African American youth (Hall, Cassidy, & Stevenson, 2008;Jones, 2007;Kenny, Blustein, Chaves, Grossman, Gallagher, 2003;Taylor et al., 1993). Strong internal consistency (α =.88) has been found in a sample of low-income African American mothers (Taylor & Roberts, 1995). The Cronbach alpha coefficients for the current sample are .74 for the children and .89 for the mothers.
The Children's Depression Inventory (CDI, Kovacs, 1992) is a self-report scale of depressive symptoms suitable for youth ranging in age from 7 to 17 years. It has demonstrated good concurrent validity with other measures of depression, cognitive distortions, and self-esteem (Myers & Winters, 2002). The CDI has adequate internal consistency (.82 to .87) for African American youth (Cardemil, Reivich, Beevers, Seligman, & James, 2007;DuRant, Cadenhead, Pendergrast, & Slavens, 1994). The Cronbach alpha coefficient for this measure in the current sample is .83.
The Beck Depression Inventory-II (BDI-II; Beck, Steer, & Brown, 1996) was used to measure the severity of mothers' depressive symptoms in areas such as mood, pessimism, sense of failure, and somatic symptoms. There is strong evidence of the reliability, validity, and utility of the instrument (Dozois, Dobson, & Ahnberg, 1998;Steer, Ball, Ranieri, & Beck, 1999). It has excellent internal consistency (α= .90) with African American samples (Gary & Yarandi, 2004;Grothe et al., 2005). The Cronbach alpha coefficient for this measure in the current sample is .89.
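The measures above are all summarized with Cronbach alpha coefficients. For readers who want to reproduce that kind of reliability check, the sketch below shows how alpha is computed from an item-by-respondent score matrix; it is an illustrative implementation, not the authors' code, and the example scores are hypothetical.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: five respondents answering a four-item scale.
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(round(cronbach_alpha(scores), 2))
```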
---
Data Analytic Plan
The goals of the analyses were to assess the effect of positive parenting skills (as measured by maternal report on the Parenting Practices Scale) on child depression (as measured by the CDI) and to test whether child social skills (as measured by maternal and child reports on the SSRS) and kinship support (as measured by maternal and child reports on the Kinship Support Scale) moderate that effect. We analyzed maternal depression severity (as measured by the BDI-II) as a covariate, as it is potentially an important variable in the transmission of depression from a mother to her child. Preliminary analyses included descriptive statistics including means and standard deviations, as well as bivariate associations measured with Pearson correlations for all study variables. Two multiple linear regression equations were performed in the primary analyses. The first regression used maternal reports of child social skills and kinship support as the moderating variables. The second regression used child report of their social skills and kinship support as the moderating variables. The positive parenting skills, kinship support and maternal depression severity variables were standardized by calculating z-scores to be used in the regression analyses. The independent variables were entered in three blocks. In the first step, maternal depression severity was entered in a block as a covariate. In the second step, positive parenting skills, child social skills, and kinship support were entered in a block to test for main effects. In the third step, the interaction between positive parenting skills and child social skills and the interaction between positive parenting skills and kinship support were entered in a block to test for moderation effects. Additionally, we conducted post-hoc analyses consisting of two regression analyses separately examining the effects of the two positive parenting skills scales (Positive Parenting and Extent of Involvement) on child depression.
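The three-block strategy described above can be illustrated with a short Python sketch using statsmodels. This is not the authors' analysis script; the file name and column names (e.g., dyads.csv, cdi, maternal_bdi) are hypothetical placeholders for the study variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dyad-level data; file and column names are placeholders.
df = pd.read_csv("dyads.csv")

# Standardize the predictors as z-scores, mirroring the analytic plan.
for col in ["maternal_bdi", "positive_parenting", "child_social_skills", "kinship_support"]:
    df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()

# Step 1: covariate only (maternal depression severity).
m1 = smf.ols("cdi ~ maternal_bdi_z", data=df).fit()
# Step 2: add the main effects of parenting, social skills, and kinship support.
m2 = smf.ols(
    "cdi ~ maternal_bdi_z + positive_parenting_z + child_social_skills_z + kinship_support_z",
    data=df,
).fit()
# Step 3: add the two interaction terms that test moderation.
m3 = smf.ols(
    "cdi ~ maternal_bdi_z + positive_parenting_z + child_social_skills_z + kinship_support_z"
    " + positive_parenting_z:child_social_skills_z + positive_parenting_z:kinship_support_z",
    data=df,
).fit()

for step, model in enumerate([m1, m2, m3], start=1):
    print(f"Step {step}: R^2 = {model.rsquared:.3f}")
print(m3.params)  # coefficients, including the interaction terms
```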
---
Results
---
Descriptive Analyses and Correlations
Means, standard deviations, and Pearson correlations of all study variables are presented in Table 1. The mean of child-reported depression symptoms was within the normative range. Similarly, means of maternal and child reports of child social skills were within the average range. The mean of maternal depression symptoms was in the clinical range, indicating moderate severity of depression in this sample. Maternal depression symptoms were negatively correlated with maternal report of kinship support (r = -.28, p = .02), but positively correlated with child report of kinship support (r = .30, p = .01). Maternal report of positive parenting skills was positively correlated with maternal report of child social skills (r = .42, p < .001) and child report of kinship support (r = .25, p = .03), but was negatively correlated with child depression symptoms (r = -.26, p = .02). Child report of kinship support was also positively correlated with child-reported social skills (r = .39, p = .001) but negatively correlated with child depression symptoms (r = -.23, p = .04).
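As an illustration of how bivariate associations like those in Table 1 can be computed, the sketch below uses scipy's pearsonr on the same hypothetical data frame introduced earlier; the variable pairs shown are examples, not the study's full correlation matrix.

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("dyads.csv")  # hypothetical data frame, as in the earlier sketch

pairs = [
    ("maternal_bdi", "kinship_support_mother"),
    ("maternal_bdi", "kinship_support_child"),
    ("positive_parenting", "cdi"),
]
for x, y in pairs:
    r, p = pearsonr(df[x], df[y])
    print(f"r({x}, {y}) = {r:.2f}, p = {p:.3f}")
```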
---
Regression Analysis using Parent-Report Measures
Table 2 displays the results of the final regression model using parents' reports of the moderators. In the first step, the covariate, maternal depression severity, was not associated with child depression symptoms. In the second step, parent report of child social skills was negatively and significantly associated with child depression symptoms. In the third step, the interaction of positive parenting skills and parent-reported child social skills was significant. To explicate this interaction, separate regression analyses were conducted testing the association between parent report of child social skills and child depression symptoms, using median splits to classify positive parenting skills as low or high. Results showed that higher parent-reported child social skills were associated with lower depression symptoms in children of parents with lower positive parenting skills (B = -0.33, t = -3.23, p = .003); however, this association was not significant when positive parenting skills were high. The interaction was plotted in graphical form (Figure 1), displaying positive parenting skills (low and high) and child social skills (low and high). There was no significant interaction between positive parenting skills and parent-rated kinship support.
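A minimal sketch of the median-split probe described above, again using the hypothetical dyads.csv columns from the earlier sketch: the sample is divided at the median of positive parenting skills, and child depression is regressed on child social skills within each half.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dyads.csv")  # hypothetical data frame, as above

median_pp = df["positive_parenting"].median()
groups = {
    "low positive parenting": df[df["positive_parenting"] <= median_pp],
    "high positive parenting": df[df["positive_parenting"] > median_pp],
}

# Probe the interaction: association of child social skills with child depression
# within each parenting group.
for label, group in groups.items():
    fit = smf.ols("cdi ~ child_social_skills", data=group).fit()
    b = fit.params["child_social_skills"]
    p = fit.pvalues["child_social_skills"]
    print(f"{label}: B = {b:.2f}, p = {p:.3f}")
```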
---
Regression Analysis using Child-Report Measures
Table 3 displays the results for the final regression model using children's reports of the moderators. In the first step, the covariate, maternal depression severity, was not associated with children's depression symptoms. In the second step, none of the main effect variables were significantly associated with children's depression symptoms. In the third step, neither the interaction between positive parenting skills and child-rated social skills nor the interaction between positive parenting skills and child-rated kinship support was significantly associated with children's depression symptoms.
---
Post-Hoc Analyses of Parenting Scales
To further explore the moderation of the relationship between parenting and child depression by child social skills, we conducted separate post-hoc regression analyses for the positive parenting and extent of involvement scales. In each case, the regression analysis included a three-step model with maternal depression severity added in the first step, positive parenting skills and parent report of child social skills added in the second step, and the interaction between positive parenting skills and parent report of child social skills added in the third step. The interactions of both positive parenting and parent report of child social skills (B = 0.10, t = 2.07, p = .042) and extent of involvement and parent report of child social skills (B = 0.17, t = 2.72, p = .008) were significant. To explicate these interactions, separate regression analyses were conducted to test the association between parent report of child social skills and children's depression symptoms using median splits to classify positive parenting as low or high. Similar analyses were conducted using median splits to classify extent of involvement as low or high. For children exposed to low levels of positive parenting, parent report of child social skills was negatively associated with children's depression symptoms (B = -0.33, t = -3.23, p = .003). Similarly, for children exposed to low levels of extent of involvement, parent report of child social skills was negatively associated with children's depression symptoms (B = -0.31, t = -3.20, p = .003).
---
Discussion
The present study examined the interrelations of positive parenting, child social skills and kinship support in determining child depression in a sample of African American children who have mothers with depressive disorders. This is a unique and understudied population that may be vulnerable to a host of mental health difficulties (Boyd, Diamond, & Ten Have, 2011). The findings support factors that protect against the development of depression among this population. Positive parenting practices and child social skills appear to be associated with lower depression symptoms in children, while the impact of kinship support is less clear.
As hypothesized, our results demonstrated a significant interaction effect of parenting and child social skills on child depression. Social skills were negatively associated with child depression symptoms only for those children exposed to poorer parenting skills, suggesting that social skills are a protective factor in these circumstances. There is substantial evidence demonstrating the deleterious effects of negative parenting on child and adolescent behavior (e.g., Goodman, 2007;Lovejoy et al., 2000); however, evidence of social skills weakening this impact is not as well documented. In a study with predominantly African American 2nd to 6th graders, negative parenting behavior was no longer associated with higher levels of depression symptoms once children's perceived competence was added into the model (Dallaire et al., 2008). Further research on this topic is needed, as social skills have been identified as a potential protective factor for children experiencing overall adversity (Luthar, Cicchetti, & Becker, 2000) and maternal depression in particular (Beardslee & Podorefsky, 1988). Surprisingly, maternal depression severity was not associated with child depression symptoms. This may be the case because there was a limited range of depression for both the mothers and the children. All the children in the sample have been exposed to significant levels of maternal depression symptoms as demonstrated by a moderate clinical level of depressive symptoms on the BDI-II. However, the children's depression scores were in the normative range. Another explanation could involve depression in the context of other adversity. For example, Silk et al. (2007) found that low maternal depression was associated with positive child functioning only for those children who had low to moderate neighborhood risks. This may have occurred in our study as well, given that the majority of the women in the sample were single, low-income mothers. Economic stressors have been found to compound the impact of maternal depression and parenting on child outcomes (Barnett, 2008;Boyd, Diamond, & Bourjolly;Murry, Bynum, Brody, Willert, & Stephens, 2001); however, we cannot determine if this was the case in our study since we did not explicitly assess the conditions of economic stress or neighborhood disorganization. Nonetheless, it is important to recognize that a number of risk and protective factors interact in very complex ways to determine whether children will develop depression (Li et al., 2007;McCarty et al., 2003).
The finding that positive parenting skills were negatively correlated with child depression suggests that positive parenting skills may serve as a protective factor for child depression. Parenting has been identified as a major mechanism in the transmission of depression from a mother to her child (e.g., Goodman, 2007;Goodman & Gotlib, 1999). Much of the maternal depression research has focused on the impact of negative parenting behaviors. Importantly, our findings suggest that positive parenting can be beneficial for families suffering from maternal depression. The results of the current study are in line with other research demonstrating positive parenting to be associated with less depression in youth (Compas et al., 2010;Jones et al., 2002) and to protect against psychological problems among children exposed to interpersonal violence, children in Head Start, and children whose mothers are HIV positive or have AIDS (Graham-Berrmann, Gruber, Howell, & Girz, 2009;Koblinsky, Kuvalanka Randolph, 2006;Murphy, Marelich, Herbeck & Payne, 2009;Riley et al., 2009).
Contrary to the study hypotheses, kinship support was neither significantly related to child depression through main effect nor by interaction with parenting skills. Further examination of this finding using the correlation matrix reveals that maternal depression severity was negatively correlated with maternal report of kinship support, but positively correlated with child report of kinship support. One interpretation of this finding is that the children of mothers with depression in this study were receiving good support from their extended family, even if their mothers did not perceive this to be the case. This is an interesting finding since it contradicts the theory that the increased social isolation resulting from maternal depression can limit the social support available to children (Coyne et al., 1987;Riley et al., 2008). Child report of kinship support was negatively correlated with child depression symptoms, which mirrors the finding for mothers. These results were expected, as there have been several studies showing that weaker social support is associated with greater depression and psychological distress within African American populations (Ceballo & McLoyd, 2002;McKnight-Eily et al., 2009;Thompson et al., 2000). For instance, kinship support has been found to be negatively correlated with adolescent depression symptoms and behavior problems in single-parent households (Hall et al., 2008;Taylor et al., 1993). Also, in a study with both African American and Caucasian mothers with depression, lower satisfaction with support networks was associated with higher rates of internalizing disorders in their children (McCarty et al., 2003).
Given the empirical evidence for the protective role of kinship support in multiple domains, the lack of significant findings for kinship support as a moderator or protective factor against depression in this sample was unexpected. It may be that child social skills are more important than kinship support in protecting children against depression. A possible explanation is that having strong social skills can enable a child to enlist the support they need from adult friends and family, given that children in our sample who rated themselves as having good social skills also rated themselves as having good kinship support. This is consistent with Beardslee and Podorefsky's (1988) description of resilient children of parents with depression as possessing characteristics to promote positive interpersonal relationships. Another possible explanation for the lack of findings related to kinship support is that child social skills are more proximal determinants of child depression, while kinship support may be important, but more distal.
There are several limitations to the present study. First, although we were able to achieve statistically significant interaction effects in our regression analyses, the sample size is relatively small. The sample size may limit the power to detect significant associations in the multiple regression analyses, thereby increasing the likelihood of Type II error. Second, without a non-clinical control group, we cannot compare African American children with and without exposure to maternal depression to determine how the interplay of kinship support and social skills may differ. Third, the study does not include child report of the mother's parenting behaviors. There may be differences in how mothers and children evaluate and perceive the mothers' parenting, and depressed mothers may not be the most accurate reporters of their own behavior. Fourth, the cross-sectional design of the study limits our ability to establish the direction of effect among the variables. Finally, the sample was predominantly low-income and thus the results may not generalize to middle- and high-income African American families.
Overall, findings from the current study highlight valuable areas for future research and intervention. Investigation of these protective factors in a longitudinal study with a larger and more economically diverse sample of African American families is needed to confirm these initial findings. Such a study should assess additional processes in the development of depression over time, such as life stressors, exposure to racism and community violence, and biological markers. Furthermore, qualitative research on protective factors for African American families with maternal depression could supplement the quantitative data and could help with hypothesis generation to better understand these processes. Our findings also suggest that improving parenting and child social skills are important elements to include in preventive intervention programming. Intervention research for families with maternal depression is lacking in general (Boyd & Gillham, 2009), and is especially scarce for African American children whose mothers have depression. For example, Compas et al. (2009) tested a cognitive-behavioral family intervention focusing on parenting skills, psychoeducation, and stress coping skills with positive findings; however, only a small number of African Americans were included in the trial. There is a clear need to include more African American families in these preventive interventions, and also to examine cultural adaptations of already empirically-supported interventions to better address the needs of this population.
---
Interaction of parent report of child social skills and positive parenting skills on child depression
| 34,352 | 1,560 |
2d302e7f6e2d4c67bf66a95cefa5e4e40a1fb60d | Sustainable Development of Farmers in Minority Areas after Poverty Alleviation Relocation: Based on an Improved Sustainable Livelihood Analysis Framework | 2,023 | [
"JournalArticle"
] | As an essential regional planning policy, poverty alleviation relocation has a significant impact on the regional economy, environment, and social well-being and is critical for sustainable development. Based on the development of minority areas in Yunnan, this study improves the traditional sustainable livelihood analysis framework and constructed a livelihood capital evaluation system including natural, physical, financial, social, human, and cultural capital. Furthermore, the measurement standard of sustainable livelihoods is proposed, which requires not only the enhancement of livelihood capital but also the coupling and coordinated development of all capital components. Based on the data of Menglai township from 2015 to 2021, this study estimates that farmers' livelihood capital has increased after relocation, and the level of coupling and coordination has improved. Still, it has yet to reach extreme coordination. Hereafter, the theoretical framework of internal and external factors affecting livelihood capital is constructed, and the influencing factors of livelihood capital are obtained through regression analysis. This study provides a new tool for evaluating livelihood capital in minority areas, obtains new findings on the sustainable development of farmers' livelihood capital after poverty alleviation relocation, and expands a new perspective for studying the influencing factors of livelihood capital. | Introduction
Relocation means that farmers leave their original land, which is an effective means to reduce poverty, reduce vulnerability, and promote regional development. It profoundly impacts the natural, physical, financial, social, human, and cultural fields, and is a necessary way to achieve sustainable development [1]. The Sustainable Development Goals (SDGs) propose balancing the sustainability of the economy, environment, and society and pursuing sustainable development [2]. Poverty eradication is considered the primary goal of sustainable development. As a global problem, although poverty can be measured by income, expenditure, and other dimensions, from the perspective of sustainable development, sustainable livelihood is considered to be the most effective and reasonable way to measure poverty because it can track poverty in multiple dimensions [3]. A sustainable livelihood is the ultimate goal of poverty reduction, which can provide people with comprehensive development programs based on different backgrounds and economic and political conditions [4]. When people face external pressures and shocks, if they can recover, maintain, or even increase their livelihood capital, their livelihood will be sustainable [5,6]. To study livelihood issues, the United Kingdom Department for International Development (DFID) has formulated a sustainable livelihood analysis framework, which is the most widely used and accepted tool for analyzing sustainable livelihood [7,8]. Livelihood capital is the core and foundation of this framework, including natural, physical, financial, social, and human capital [9]. Promoting livelihood capital will help low-income families escape from poverty, while people with insufficient livelihood capital struggle to get out of the poverty trap. Therefore, improving livelihood capital is vital for all countries, especially developing countries, to eliminate poverty and achieve sustainable development [10]. For farmers, realizing the sustainable development of livelihood capital is the fundamental purpose and significance of the SDGs. On the one hand, the more livelihood capital farmers have, the more able they are to resist risks and the more choices they have. On the other hand, the reasonable structure and allocation of livelihood capital can broaden farmers' livelihood channels and enable farmers to switch between different livelihood strategies [11]. Thus, farmers' sustainable livelihood is not only reflected in the increase in the absolute value of livelihood capital but also requires the coupling and coordinated development of various capitals.
Governments worldwide have made several plans to improve the sustainability of people's livelihoods. For developing countries, relocation is considered the most effective way. China has implemented five significant projects of a precision poverty alleviation strategy and ensured the elimination of absolute poverty through five measures: supporting production and employment, poverty alleviation relocation, ecological protection, developing education, and providing minimum living security [12]. Poverty alleviation relocation, as the "first project" of accurate poverty alleviation, aims to realize the sustainable development of relocated farmers, helping farmers move out of areas with a harsh environment and attain lasting development. Since poverty alleviation relocation began, about 35,000 resettlement communities have been built nationwide, and more than 9.6 million poor people have been resettled. The relocated farmers can eliminate the poverty trap by improving infrastructure construction, developing industries, and strengthening education and social security in the resettlement area [13,14]. As the most prominent poverty reduction target country, China has contributed more than 70% of the global poverty reduction population and made remarkable achievements [12,15]. However, the factors that restrict people's development still exist, the risk of returning to poverty has not been eliminated, and poverty governance still has a long way to go [16,17]. In particular, the COVID-19 epidemic has negatively impacted the economy, reduced people's livelihood capital, and hindered the realization of the SDGs [18,19]. In addition, poverty alleviation relocation is not only the migration of the population but also the complicated process of significant changes in the social system, economy, and politics, and the disintegration-reconstruction of farmers' livelihood capital [20,21]. Suppose the relevant departments fail to effectively implement the follow-up integration and assistance work for the relocated farmers. In that case, they will be marginalized, and poverty and inequality will be aggravated, making it challenging to achieve sustainable development, which runs counter to the original intention of the policy [22,23]. In particular, farmers in minority areas have formed unique religious beliefs, living customs, and cultural forms after long-term development. After relocation, they need to adapt to the rapidly changing external environment passively. The original social relations and economic models disintegrate, so it is difficult to reconstruct their national culture and social relations and adapt to the new livelihood model. Thus, the poverty alleviation and sustainable development of farmers in minority areas are even more arduous [3].
This study improved the traditional analysis framework of sustainable livelihoods, combined with the characteristics of minority areas, and added cultural capital to the evaluation system of livelihood capital. Based on the data of Menglai Township in Yunnan Province from 2015 to 2021, it was concluded that the livelihood capital and its coupling and coordination level of farmers have improved after relocation, which meets the requirements of sustainable livelihood development. Finally, the theoretical framework of internal and external factors affecting farmers' livelihood capital was constructed, and the influencing factors of livelihood capital were obtained through empirical analysis. This study can effectively break the development dilemma of livelihood capital after the relocation of farmers in minority areas and help the relocated farmers achieve the goal of sustainable development.
This study has made outstanding contributions to both the theoretical framework and policy practice. First, it provides a new tool for evaluating livelihood capital in minority areas. It improves the DFID's sustainable livelihood analysis framework, constructs the evaluation system of farmers' livelihood capital in minority areas, and further emphasizes the importance of national culture, which provides ideas for future research on livelihood capital according to regional characteristics. Second, it obtains new findings on the sustainable development of farmers' livelihood capital after relocation. Poverty alleviation relocation is a remarkable feat in the history of human migration and poverty reduction worldwide. Evaluating the livelihood capital and its coupling and coordination level of relocated farmers provides a basis for policy implementation and promotes the realization of sustainable development goals. Third, the study expands a new perspective for studying influencing factors of livelihood capital. It constructs the theoretical framework of internal and external factors that affect relocated farmers' livelihood capital, breaks the limitation that the existing research mainly relies on external forces to improve livelihood capital, and realizes the complementarity of endogenous motivation and external assistance.
The remainder of this study is organized as follows. Section 2 introduces the materials and methods. Section 3 lists the measurement results of livelihood capital and its coupling and coordination level, and verifies the internal and external factors affecting livelihood capital through regression analysis. Section 4 presents discussions of this study. The final section summarizes the study.
---
Materials and Methods
Based on the SDGs and sustainable livelihood analysis framework, this study analyzes the livelihood issues of relocated farmers in Menglai Township, Yunnan minority areas, to realize the sustainable development of farmers' livelihood capital. To carry out the research effectively, it is necessary to construct an evaluation system of farmers' livelihood capital in minority areas, which is the basis of any quantitative analysis on livelihood capital, and further measure and compare the stock of livelihood capital and the coupling and coordination level between livelihood capitals before and after relocation. Hereafter, based on theoretical analysis, this study constructs a theoretical framework of internal and external factors affecting the livelihood capital of relocated farmers in Yunnan minority areas and explores the influencing factors of livelihood capital to realize accurate policies and the sustainable development of livelihood capital. The framework and design of the study are shown in Figure 1.
---
Livelihood Capital Evaluation
---
Construction of Livelihood Capital Evaluation System
Farmers' livelihood capital includes natural, physical, financial, social, human, and cultural capital. Natural capital is the natural resources, environmental services, and biodiversity that people enjoy, including all kinds of land, forests, wildlife, and water resources [4]. For poor farmers, natural capital is the basis of their productive activities and is most closely associated with livelihood vulnerability [24], in which land is the most significant capital [11,25]. The primary function of physical capital is to meet the basic needs of farmers and improve their productivity, including safe housing, vehicles, roads, transportation, and production equipment and tools. Financial capital usually refers to the funds raised or controlled by people to achieve their livelihood goals, including relief, lending, savings, and income. For farmers, the most crucial financial capital is their income. The richer the sources of income, the more they can accumulate financial capital. Social capital is embodied in the participation of social groups, social contact, social trust, and public health support [26,27]. The level of farmers' social capital is greatly influenced by the quality and scale of the social network, and it will also affect the realization of the functions of the rest of the livelihood capital. Through people's interaction, social capital can bring farmers more resources and social support [28]. Human capital usually exists in the form of skills, health, and education [29]. On the one hand, the external manifestation of poverty can be reflected in the lack of human capital; on the other hand, the lack of human capital will further lead to poverty. Cultural capital is the element that best reflects regional characteristics, including norms, values, rules, indigenous customs, traditional knowledge, and activities [30]. Cultural factors often impact farmers' agricultural practices, production and consumption patterns, family decisions, and attitudes toward new agricultural technologies [31][32][33]. Thus, for farmers in minority areas, cultural capital, like the other five capitals, greatly influences farmers' livelihood strategies and results. As shown in Figure 2, this study comprehensively summarizes the relevant literature and combines the characteristics of minority areas to build an evaluation system of farmers' livelihood capital in Yunnan minority areas based on the principles of scientificity and objectivity, comprehensiveness and representativeness, comparability and operability.
---
Measurement of Livelihood Capital
Based on the evaluation system of livelihood capital constructed above, the weight of each index was obtained by using the global entropy method, and the comprehensive evaluation value of livelihood capital was calculated, which avoids the interference of people's subjective factors and fully considers the characteristics of three-dimensional spatiotemporal data composed of farmers, indicators and time [34,35]. The specific steps are as follows:
First, a global evaluation matrix is constructed to evaluate m farmers' livelihood capital in t years with n indicators.
X = \begin{pmatrix} X_{11}^{1} & \cdots & X_{1n}^{1} \\ \vdots & \ddots & \vdots \\ X_{m1}^{t} & \cdots & X_{mn}^{t} \end{pmatrix} \quad (1)
Second, the range method standardizes the data to eliminate differences [36].
If the indicator is positive,
X'_{ij} = \frac{X_{ij} - \min X_{ij}}{\max X_{ij} - \min X_{ij}} \times 0.9 + 0.1, \quad (1 \le i \le mt,\; j = 1, 2, 3, \ldots, 17) \quad (2)
If the indicator is negative,
X'_{ij} = \frac{\max X_{ij} - X_{ij}}{\max X_{ij} - \min X_{ij}} \times 0.9 + 0.1, \quad (1 \le i \le mt,\; j = 1, 2, 3, \ldots, 17) \quad (3)
Third, the weight of each index is calculated.
w_{j} = \frac{1 - \left( -k \sum_{i=1}^{mt} \frac{X'_{ij}}{\sum_{i=1}^{mt} X'_{ij}} \ln \frac{X'_{ij}}{\sum_{i=1}^{mt} X'_{ij}} \right)}{\sum_{j=1}^{17} \left[ 1 - \left( -k \sum_{i=1}^{mt} \frac{X'_{ij}}{\sum_{i=1}^{mt} X'_{ij}} \ln \frac{X'_{ij}}{\sum_{i=1}^{mt} X'_{ij}} \right) \right]}, \quad \left( k = \frac{1}{\ln mt} \right) \quad (4)
Fourth, the comprehensive evaluation value of livelihood capital is calculated.
LC = \sum_{j=1}^{n} w_{j} X'_{ij} \quad (5)
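To make Equations (1)-(5) concrete, the following sketch implements the global entropy method on a stacked farmer-by-year panel. It is an illustrative reconstruction of the procedure, not the authors' code; the handling of indicator direction and the random example data are assumptions.

```python
import numpy as np

def global_entropy_weights(X, positive):
    """X: (m*t, n) matrix of raw indicator values for m farmers over t years, stacked.
    positive: boolean array of length n (True for positive indicators).
    Returns (entropy weights, standardized matrix), following Eqs. (2)-(4)."""
    mt, n = X.shape
    Xs = np.empty_like(X, dtype=float)
    rng = X.max(axis=0) - X.min(axis=0)          # assumes no constant columns
    # Min-max standardization with the 0.9 scale / 0.1 shift of Eqs. (2)-(3),
    # which keeps every value strictly positive so the logarithm below is defined.
    Xs[:, positive] = (X[:, positive] - X[:, positive].min(axis=0)) / rng[positive] * 0.9 + 0.1
    Xs[:, ~positive] = (X[:, ~positive].max(axis=0) - X[:, ~positive]) / rng[~positive] * 0.9 + 0.1
    p = Xs / Xs.sum(axis=0)                      # share of each observation in each indicator
    k = 1.0 / np.log(mt)
    entropy = -k * (p * np.log(p)).sum(axis=0)   # information entropy of each indicator
    divergence = 1.0 - entropy
    weights = divergence / divergence.sum()      # Eq. (4)
    return weights, Xs

# Hypothetical panel: 4 farmers observed in 2 years (8 rows) on 3 indicators,
# the third treated as a negative indicator.
X = np.random.default_rng(0).uniform(1, 10, size=(8, 3))
weights, Xs = global_entropy_weights(X, positive=np.array([True, True, False]))
LC = Xs @ weights                                # composite livelihood capital score, Eq. (5)
print(np.round(weights, 3), np.round(LC, 3))
```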
---
Measurement of Coupling Coordination Level
More importantly, the sustainable development of livelihood capital is not only manifested in the increase in its absolute value but also in the improvement in the level of coupling and coordination among various capitals.
(1) Coupling degree model "Coupling" refers to the interaction and influence between several systems. The coupling degree describes the degree of interaction, and the benign coupling is measured by the coordination degree. The higher the level of coupling and coordination, the more harmonious and orderly the development of each subsystem [37].
The calculation formula of the coupling degree of multiple systems is as follows:
C_{n} = \left\{ \frac{\mu_{1} \times \mu_{2} \times \cdots \times \mu_{n}}{\left[ \left( \mu_{1} + \mu_{2} + \cdots + \mu_{n} \right) / n \right]^{n}} \right\}^{\frac{1}{n}} \quad (6)
where µ_i (i = 1, 2, ..., n) is the comprehensive evaluation function of each subsystem, and the number of subsystems in this study is n = 6, so the coupling level of six kinds of livelihood capital is:
C = \left\{ \frac{NC \times PC \times FC \times SC \times HC \times CC}{\left[ \left( NC + PC + FC + SC + HC + CC \right) / 6 \right]^{6}} \right\}^{\frac{1}{6}} \quad (7)
where C is the coupling degree of six capitals; NC, PC, FC, SC, HC, and CC represent the evaluation of six subsystems, that is, natural, physical, financial, social, human, and cultural capital values, respectively.
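A brief sketch of Equations (6) and (7), together with the coupling coordination degree D = √(C × T) that appears as Equation (8) just below. The example capital scores are hypothetical, and treating T as the simple sum of the six subsystem scores is an assumption based only on the text's description of T as "the total amount of livelihood capital".

```python
import numpy as np

def coupling_degree(capitals):
    """Eq. (7): coupling degree of the six capital scores (NC, PC, FC, SC, HC, CC)."""
    n = capitals.size
    product = np.prod(capitals)
    mean = capitals.sum() / n
    return (product / mean ** n) ** (1.0 / n)

def coupling_coordination(capitals):
    """Eq. (8), below: D = sqrt(C * T). T is taken here as the sum of the six
    subsystem scores (an assumption; the text calls T the total livelihood capital)."""
    C = coupling_degree(capitals)
    T = capitals.sum()
    return np.sqrt(C * T)

# Hypothetical subsystem scores for one farmer-year (NC, PC, FC, SC, HC, CC).
scores = np.array([0.12, 0.15, 0.10, 0.14, 0.18, 0.09])
print(round(coupling_degree(scores), 3), round(coupling_coordination(scores), 3))
```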
(2) Coupling and coordination model
The coupling degree can only reflect the level of interaction between subsystems and cannot obtain their coordination degree. The coupling coordination degree can comprehensively consider the two dimensions of "development" and "coordination" between systems, and the formula is as follows:
D = \sqrt{C \times T} \quad (8)
where C is the coupling degree between capitals, T is the total amount of livelihood capital, D is the degree of coupling and coordination among the six capitals, and its level and classification are shown in Table 1 [38].
The theory of internal and external factors suggests that, in the process of the development and change of a subject, external and internal factors complement each other and are indispensable, which jointly affect the evolution and development of the subject. A comprehensive consideration of the internal and external factors that affect the subject is conducive to determining their respective correlations, interactions, and possible complementary or substitutive relationships to realize an in-depth analysis of the subject [39].
Thus, the characteristics of farmers are the essential factors that affect their livelihood capital after poverty alleviation relocation, which determines the primary trend and subjective initiative of livelihood capital development. Moreover, the change in environment, as an external factor that affects livelihood capital, is an indispensable condition to realize an improvement in livelihood capital. If farmers only rely on external forces and ignore the critical role of internal factors, they will strengthen their dependence and reduce their initiative. On the contrary, improving their livelihood capital will be challenging if they only focus on internal factors and lack external help. Therefore, farmers can form a complementary mechanism of internal self-development and practical external assistance by fully considering the internal and external factors affecting livelihood capital.
In terms of internal factors affecting farmers' livelihood capital, the family life cycle theory describes the process of a family from emergence, development, and maturity to extinction [40]. The characteristics of the farmers' family population will change with the different family life cycles, affecting the family's livelihood strategy and livelihood capital [41,42].
In terms of external factors affecting farmers' livelihood capital, location theory integrates human activities and space and puts forward that areas with abundant cultivated land resources, low transportation costs, and convenient transportation are more conducive to the development of farmers, providing a scientific basis for poverty alleviation relocation [43]. Therefore, geographical location is the most basic external feature of farmers, and the advantages and disadvantages of location conditions determine the development foundation and conditions of farmers, which play a decisive role in the sustainable development of farmers. At the same time, with the gradual improvement in the theory of sustainable development and the increasing demand for tourism, the sustainable development theory of tourism poverty alleviation has risen rapidly. The theory puts forward that by developing tourism, the natural, economic, social, and cultural fields will be fully developed, thus reducing or eliminating the poverty of local farmers. In addition, relocation can promote farmers to achieve sustainable livelihood by creating employment opportunities and increasing income [44,45]. This theory provides an action guide for the sustainable development of the livelihood capital of relocated farmers.
The cumulative causation theory believes that in a developing society, the change in one factor will make other factors change accordingly, further strengthening this factor and eventually forming a circular development model of self-strengthening and accumulation [46]. The causes of poverty often play a leading role in the sustainable livelihood of farmers. With the development of the economy and society, farmers will further aggravate this poverty phenomenon because of their poverty-causing factors. On the contrary, if farmers have some development advantages from the beginning, they will realize sustainable development based on their existing advantages. The causes of poverty include not only external factors such as water shortage, land shortage, and backward traffic conditions, but also internal factors such as lack of self-development motivation, disability, and illness, which are the primary concerns of sustainable livelihood.
Based on the above analysis, the theoretical framework of internal and external factors affecting the livelihood capital of relocated farmers in Yunnan minority areas was constructed, as shown in Figure 3.
---
Variables and Data
The study area is Menglai Township, Cangyuan Wa Autonomous County, Yunnan Province. The township is dominated by Wa nationality, and its ethnic structure is complex and diverse. It is a typical representative of minority areas because of its relatively high altitude difference and harsh natural environment. Since the "Thirteenth Five-Year Plan", Menglai Township has implemented the poverty alleviation relocation project, and the relocated farmers have eliminated poverty. There are seven resettlement sites in Menglai Township, namely: Haibie resettlement site in Manlai Village, Mangmajie resettlement site in Menglai Village, Gonggaji resettlement site in Yong'an Village, Yonggongchadi resettlement site in Gongnong Village, Gongbobo resettlement site in Dinglai Village, Gongyalong resettlement site in Banlie Village, and Gongwang resettlement site in Banlie Village, involving 324 households with 1265 people. The data in this study were obtained from the continuous and in-depth field investigation in Menglai Township, Cangyuan County, from 2015 to 2021. Moreover, we referred to the Statistical Bulletin of National Economic and Social Development, the Yearbook of Lincang, the Yearbook of Cangyuan Wa Autonomous County, and related government documents in Cangyuan County from 2015 to 2021 to provide a good database for this study.
Taking the calculated livelihood capital value as the explained variable, based on theoretical analysis, the number of domestic and foreign tourists, family population, administrative villages (the administrative village was assigned to the farmers in Menglai Village as 1, Yongan Village as 2, Yingge Village as 3, Minliang Village as 4, Manlai Village as 5, Gongnong Village as 6, Gongsa Village as 7, Dinglai Village as 8, and Banlie Village as 9), and causes of poverty (the causes of poverty were divided into capacity loss, increased burden, factor shortage, accidental impact, and lack of self-development motivation, and they are assigned 1 to 5, respectively) were selected as the explanatory variables to explore their influence on the livelihood capital of relocated farmers. Based on the data on livelihood capital and its influencing factors of 144 relocated farmers in Menglai Township from 2015 to 2021, the descriptive statistics of each variable are listed in Table 2. Before empirical analysis, the multicollinearity needs to be tested first. If the explanatory variables have multiple collinearities, it will lead to pseudo-regression and estimation bias. Thus, the variance inflation factor (VIF) is used to test the multicollinearity problem to improve the accuracy of regression results. The greater the VIF, the more serious the collinearity problem is. The results of the multicollinearity test are shown in Table 3. It can be seen that the maximum VIF is 1.03, the VIF value of each variable is far less than 10, and the average value of VIF is far less than 5; that is, there is no multicollinearity among the influencing factors selected in the study, which meets the requirements of data analysis [47].
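To illustrate the multicollinearity screening, a VIF check of this kind could be run in Python as follows; the toy data frame and column names are stand-ins for the survey variables, not the authors' code.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Toy stand-in for the farmer-year records; the real data come from the survey.
df = pd.DataFrame({
    "tourists":          [1200, 1500, 900, 2100, 1800, 2500],
    "family_population": [4, 5, 3, 6, 4, 5],
    "village":           [1, 3, 5, 2, 7, 9],
    "poverty_cause":     [2, 1, 4, 3, 5, 2],
})

X = sm.add_constant(df)
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif.drop("const"))         # VIF per explanatory variable
print(vif.drop("const").mean())  # the paper reports a maximum VIF of 1.03 and a mean well below 5
```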
---
Model Construction and Regression Method
To explore the influence of the number of domestic and foreign tourists, family population, administrative villages, and causes of poverty on various capitals and livelihood capital, a regression model is constructed as follows:
capital_it = α_0 + α_1 tourist_it + α_2 population_it + α_3 village_it + α_4 cause_it + ε_it (9)
where i is the farmer, t is the year, capital_it represents the farmer's natural, physical, financial, social, human, cultural, and total livelihood capital values, tourist_it represents the number of domestic and foreign tourists, population_it represents the family population, village_it represents the administrative village to which the farmer belongs, and cause_it reflects the cause of the farmer's poverty. α_0 is a constant term, and ε_it is a random error term. An F-test, LM test, and Hausman test were used to determine the regression method of the model, and the statistical test results are shown in Table 4. First, the p value of the F-test is 0.0000, which is significantly less than 0.05, rejecting the null hypothesis that the pooled regression model is better than the fixed effect model; that is, the fixed effect regression model should be chosen. Second, the p value of the LM test is less than 0.05, which rejects the null hypothesis that the pooled regression model is better than the random effect model, indicating that the random effect model is better. Finally, the null hypothesis of the Hausman test is that the random effect model is superior to the fixed effect model, and the p value of the Hausman test is 0.9968, which is significantly greater than 0.05, indicating that the null hypothesis is accepted; that is, the random effect model is superior to the fixed effect model. Therefore, to make the analysis results more realistic and reasonable, a random effect model is used for the regression.
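For readers who wish to replicate the estimator choice, the logic can be sketched in Python with the linearmodels package. The snippet below fits fixed- and random-effects estimates on synthetic stand-in data and computes the Hausman statistic by hand; the data frame, variable names, and coefficient values are illustrative assumptions rather than the study's data, and the F-test and LM test would be carried out analogously.

```python
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

# Synthetic stand-in for the farmer-year panel (144 farmers x 7 years); in the
# study the explained variable is the computed livelihood capital value.
rng = np.random.default_rng(0)
idx = pd.MultiIndex.from_product([range(144), range(2015, 2022)],
                                 names=["farmer", "year"])
panel = pd.DataFrame({
    "tourists": rng.normal(10, 2, len(idx)),
    "family_population": rng.integers(1, 8, len(idx)),
    "village": rng.integers(1, 10, len(idx)),
    "poverty_cause": rng.integers(1, 6, len(idx)),
}, index=idx)
panel["livelihood_capital"] = (
    0.02 * panel["tourists"] + 0.001 * panel["family_population"]
    + rng.normal(0, 0.05, len(idx))
)

y = panel["livelihood_capital"]
X = panel[["tourists", "family_population", "village", "poverty_cause"]]
fe = PanelOLS(y, X, entity_effects=True).fit()
re = RandomEffects(y, X).fit()

# Hausman test: H0 = the random effects estimator is consistent (prefer RE).
b = fe.params - re.params
v = fe.cov - re.cov
stat = float(b.T @ np.linalg.inv(v) @ b)
print("Hausman chi2 =", round(stat, 3), "p =", round(stats.chi2.sf(stat, df=len(b)), 4))
```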
---
Results
---
Measurement of Livelihood Capital
The livelihood capital of relocated farmers from 2015 to 2021 is shown in Figure 4. Before the relocation, farmers' livelihood capital increased slightly in 2015-2016, and the change was not noticeable. In 2017-2018, with the acceleration of poverty alleviation relocation and the improvement in various support policies, the livelihood capital of farmers increased significantly, reaching a maximum of 0.6451 in 2019 after relocation. Meanwhile, after the relocation was fully completed, various subsidy policies were weakened. Moreover, affected by COVID-19, farmers' livelihood capital declined slightly in 2020-2021, but it was still greatly improved compared with the livelihood capital before the relocation.
Specifically, the distribution of farmers' livelihood capital from 2015 to 2021 is shown in Figure 5. It can be found that all kinds of livelihood capital improved and developed steadily. In terms of natural capital, farmers' natural capital was 0.0232 and 0.0257 in 2015 and 2016, respectively. After the relocation, the natural capital increased and the average value was 0.0422. In terms of physical capital, farmers' physical capital before the relocation was 0.0313 and 0.0477, respectively, and the average value of physical capital after relocation was 0.1775, which improved the safety and convenience of farmers' production and life. In terms of financial capital, farmers' financial capital in 2015 and 2016 was 0.0205 and 0.0227, respectively. After the relocation, the average financial capital was 0.0322, farmers had more opportunities to increase their income and obtain employment, and their income sources were more stable and diversified. In terms of social capital, farmers' social capital in 2015 and 2016 was 0.0225 and 0.0295, respectively. After the relocation, the average social capital was 0.1688, which became the capital with the most significant increase. In terms of human capital, farmers' human capital before the relocation was 0.0237 and 0.0284, respectively. After relocation, the average human capital was 0.1292, and farmers' knowledge and skills were improved. In terms of cultural capital, farmers' cultural capital in 2015 and 2016 was 0.0167 and 0.0307, respectively. After the relocation was completed, that is, in 2019-2021, the average cultural capital was 0.0795. By carrying out various cultural activities to enhance the local cultural attraction, the cohesion of farmers has been continuously improved, and cultural activities have been further transformed into productive forces, which have become a source of vitality for promoting the sustainable development of farmers' livelihood capital.
---
Coupling and Coordination Level of Livelihood Capital
Figure 6 describes the coupling and coordination level of various capitals of relocated farmers in Menglai Township from 2015 to 2021. Before the relocation, farmers' capital was on the verge of imminent imbalance. With the promotion of poverty alleviation relocation, the coupling and coordination level of farmers' livelihood capital was significantly improved by implementing comprehensive support policies. After the relocation, from 2019 to 2021, farmers' livelihood capital was upgraded to a moderately coordinated state. Although the coupling and coordination level has been significantly improved, the six capitals have yet to reach an extremely coordinated state due to the differences in the initial level and growth rate of each capital. It is necessary to promote the coupled and coordinated development of various capitals, which is not only conducive to the increase in livelihood capital but can also break the barriers of transformation among various capitals and promote the sustainable development of livelihood capital.
---
Influencing Factors of Livelihood Capital
---
Regression Result
Based on the random effect model, the effects of various factors on the livelihood capital and total capital of relocated farmers in minority areas are verified, and the results are shown in Table 5.
(1) Number of domestic and foreign tourists
The regression results show that when the number of domestic and foreign tourists increases by one percentage point, farmers' natural, physical, financial, social, human, and cultural capital and total livelihood capital increase by 0.0273, 0.2191, 0.0191, 0.2311, 0.1388, 0.0207, and 0.7486 percentage points, respectively, at the significance level of 1%, which shows that tourism promotes the livelihood capital of farmers. Among them, the growth of tourists has the most obvious influence on social capital, and its promotion of physical, human, natural, cultural, and financial capital weakens in turn. Farmers' social networks can be expanded by vigorously developing tourism, and they can obtain more social support and a sense of belonging and satisfaction. In addition, through skills training and "driven by capable people", farmers' labor skills are enriched, and their human capital is improved. Moreover, with tourism development in minority areas, various cultural tourism products with ethnic characteristics have appeared. Farmers' awareness of environmental protection and of "Lucid waters and lush mountains are invaluable assets" has deepened, gradually promoting cultural and natural capital. For farmers, with the increase in the number of domestic and foreign tourists, the most intuitive change is reflected in the improvement in farmers' income and basic living security, that is, the growth of financial capital and physical capital, and the development of tourism has improved farmers' quality of life and living standards. Finally, farmers' livelihood capital can be improved by accumulating human, physical, and financial resources that are conducive to development.
Table 5. Regression result.
(2) Family population
The regression coefficient of the influence of family population on natural capital is 0.0016 at the 1% significance level, meaning that family population promotes natural capital. Specifically, for each additional family member, the natural capital of farmers increases by 0.16%. Furthermore, the influence of family population on farmers' financial capital is significant at the 1% level: for each additional family member, farmers' financial capital increases by 0.17%, and family size positively impacts farmers' income growth.
(3) Administrative villages
The regression coefficient of administrative villages on natural capital is 0.0005 at the 5% significance level, and the regression coefficients on physical, financial, and social capital are -0.0018, -0.0006, and -0.0024, respectively, at the 1% significance level. Thus, the development of livelihood capital differs considerably among farmers in different administrative villages. There are often significant differences in geographical conditions, infrastructure, road traffic conditions, economic development level, and social relations of farmers in different administrative villages, which further affect farmers' livelihood capital.
(4) Causes of poverty
The regression coefficients of the causes of poverty on financial and social capital are -0.0010 and 0.0031, respectively, at the 1% significance level, indicating that farmers affected by capacity loss, increased burden, factor shortage, accidental impact, and lack of self-development motivation perform differently in terms of financial and social capital. Therefore, to improve farmers' livelihood capital and realize sustainable livelihood, it is necessary to attach importance to the orderly connection between various policies and poverty alleviation relocation and to implement differentiated assistance and development measures for farmers with different causes of poverty.
---
Robustness Test
(1) Replacing the estimation method
To verify the robustness of the research conclusion, OLS and FE estimation methods are used to re-estimate the influence of the number of domestic and foreign tourists, family population, administrative villages, and the causes of poverty on farmers' livelihood capital. The regression results are shown in Table 6, showing that the significance and direction of most variable coefficients are stable. The research conclusions are consistent with the benchmark regression results, indicating that the empirical analysis results are robust. A further check is reported in Table 7: although the regression coefficients of the various influencing factors differ in absolute value, the sign and significance level of the coefficients remain unchanged, which further proves that the benchmark regression results are robust.
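A rough way to mirror this robustness check, continuing from the synthetic panel built in the earlier sketch (and therefore again purely illustrative rather than the authors' code), is to fit the three estimators side by side and compare coefficient signs and 5% significance:

```python
# Continuing from the synthetic panel (y, X) constructed in the earlier sketch.
import pandas as pd
from linearmodels.panel import PooledOLS, PanelOLS, RandomEffects

models = {
    "OLS": PooledOLS(y, X).fit(),
    "FE": PanelOLS(y, X, entity_effects=True).fit(),
    "RE": RandomEffects(y, X).fit(),
}

# Tabulate coefficients, flagging those significant at the 5% level, to check
# that signs and significance agree across estimators as in Tables 5 and 6.
summary = pd.DataFrame({
    name: res.params.round(4).astype(str) + res.pvalues.map(lambda p: " *" if p < 0.05 else "")
    for name, res in models.items()
})
print(summary)
```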
---
Discussion
Since the concept of sustainable livelihood was put forward, it has become the core issue of poverty and sustainable development research, which focuses on ability, fairness, and sustainability [30]. Livelihood capital is the core of sustainable livelihood, and scholars have made functional explorations and summaries in the evaluation and promotion of livelihood capital and the study of livelihood capital in specific events.
Most of the existing studies evaluate livelihood capital from five aspects: natural, physical, financial, social, and human capital, according to DFID's sustainable livelihood analysis framework [48]. Many studies promote the development of livelihood capital through the intervention of external factors and seldom explore the impact of farmers' own characteristics on livelihood capital [49][50][51][52][53][54]. Moreover, the research on livelihood capital in specific events focuses on climate change [55][56][57]. However, the existing research rarely investigates the influence of relocation on farmers' livelihood [12], especially farmers' livelihood capital after poverty alleviation relocation in minority areas, and insufficient attention is paid to cultural capital in minority areas. Yet, due to the particularity of social history, cultural traditions, and living customs, minority areas need to fully consider and respect local characteristics and development laws, choose development methods based on local conditions, take into account the internal and external influencing factors of livelihood capital, and promote both the stock improvement in livelihood capital and the coordinated development of various capitals.
Ecological, economic, and social factors such as natural disasters, environmental pollution, climate change, land tenure deterioration, lack of rural employment opportunities, lack of educational resources, and inadequate health and social welfare are the leading causes of the relocation of farmers. Based on the factors affecting farmers' livelihood capital in this study, to improve the livelihood capital of relocated farmers, they can be organized to move to areas with tourist resources and increase their income by developing homestays and rural tourism. Healthy childbearing and sound childcare should be promoted, family members' education and employment levels should be improved, and their self-development ability should be enhanced. In addition, they can strengthen cooperation between different administrative villages, jointly carry out planting and breeding projects, share resources, and improve production efficiency. Furthermore, government departments need to deeply understand the causes of farmers' poverty and formulate specific assistance programs. For example, they can encourage young people in impoverished households to start businesses in their hometowns if the family is impoverished due to a lack of labor.
This study has great theoretical and practical significance for academic research and policymaking. On the one hand, by supplementing cultural capital, the original sustainable analysis framework is improved, which provides a scientific theoretical reference for the study of sustainable livelihood issues. Furthermore, the theoretical framework of internal and external factors affecting livelihood capital is constructed, making it possible to pay attention not only to the importance of external assistance but also to farmers' characteristics and endogenous motivation. On the other hand, this study is conducive to the relevant departments to realize that farmers need not only physical and economic support but also cultural integration after relocation to continuously enrich cultural support carriers, build cultural facilities, enrich national cultural activities, and meet the diverse cultural needs of relocated farmers. Moreover, the internal and external factors that affect farmers' livelihood capital are comprehensively considered, and the relocated farmers are given specific policies based on different influencing factors.
The limitation of this study is that only one area was taken as an example for field investigation and empirical analysis, and whether the index system and empirical research results are suitable for farmers in other minority areas remains to be discussed. In the future, it will be necessary to expand the research area further and increase the comparative analysis of different regions to enhance the universality of the research conclusions.
---
Conclusions
As the "first project" in the battle against poverty, poverty alleviation relocation is the most effective way to alleviate poverty for farmers in regions where "one's soil and water cannot support one's people". It is also a great feat in the history of human migration and world poverty reduction and an essential part of the "China Plan" for poverty alleviation in the new era. As the main battlefield of poverty alleviation, Yunnan Province integrates frontier, ethnic, mountainous, and poverty. To further consolidate poverty alleviation achievements and enhance the livelihood capital of relocated farmers, this study takes the relocated farmers in Menglai Township, Cangyuan County, Yunnan Province, from 2015 to 2021 as the research object, evaluates the livelihood capital of the farmers, and explores the influencing factors of livelihood capital to provide decision support for the sustainable development of the livelihood capital of the relocated farmers, promote the effective connection between the poverty alleviation achievements and the rural revitalization strategy, prevent the farmers from returning to poverty, and realize the sustainable development goal. The main research contents and conclusions are as follows:
(1) Construct a livelihood capital evaluation system for farmers in Yunnan minority areas. The evaluation system of farmers' livelihood capital includes 17 indexes, including four third-level indexes of natural capital, three third-level indexes of physical capital, four third-level indexes of financial capital, two third-level indexes of social capital, two third-level indexes of human capital, and two third-level indexes of cultural capital.
(2) Measure the value of livelihood capital and its coupling and coordination level. The livelihood capital and all kinds of farmers' capital have increased significantly after relocation, and the level of coupling and coordination among the six types of capital has been improved. However, there is still a significant gap in the level of extreme coordination.
(3) Construct the theoretical framework of internal and external factors affecting the livelihood capital of relocated farmers. Integrating the internal and external factors theory, family life cycle theory, location theory, sustainable development theory of tourism poverty alleviation, and the cumulative causation theory, the empirical analysis shows that the number of domestic and foreign tourists, family population, administrative villages, and causes of poverty have different degrees of influence on farmers' livelihood capital.
---
Data Availability Statement: Not applicable.
---
Conflicts of Interest:
The authors declare no conflict of interest.
---
Author Contributions: Conceptualization, J.W. and H.Y.; methodology, J.W.; software, J.W.; validation, J.W., H.Y. and J.Z.; formal analysis, J.Z.; investigation, J.W.; resources, H.Y.; data curation, H.Y.; writing-original draft preparation, J.W.; writing-review and editing, J.W.; visualization, J.W.; supervision, J.Z.; project administration, H.Y.; funding acquisition, H.Y. All authors have read and agreed to the published version of the manuscript. | 46,624 | 1,434 |
4c3c6b0f5577ab7969eaf4763fd6b5e9fc4edb66 | Health decision-making capacity and modern contraceptive utilization among sexually active women: Evidence from the 2014–2015 Chad Demographic and Health Survey | 2,022 | [
"JournalArticle",
"Review"
] | Background Globally, there has been an increase in the percentage of women in their reproductive ages who need modern contraceptives for family planning. However, in Chad, use of modern contraceptives is still low (with a prevalence of 7.7%), and this may contribute to the country's annual population growth rate of 3.5%. Social, cultural, and religious norms have been identified to influence the decision-making abilities of women in sub-Saharan Africa concerning the use of modern contraceptives. The main aim of the study is to assess the association between the health decision-making capacities of women in Chad and the use of modern contraceptives. The 2014-2015 Chad Demographic and Health Survey data involving women aged 15-49 were used for this study. A total of 4,113 women who were in sexual union with information on decision making, contraceptive use and other sociodemographic factors like age, education level, employment status, place of residence, wealth index, marital status, age at first sex, and parity were included in the study. Descriptive analysis and logistic regression were performed using STATA version 13. The prevalence of modern contraceptive use was 5.7%. Women who take health decisions with someone are more likely to use modern contraceptives than those who decide alone (aOR = 2.71; 95% CI = 1.41, 5.21). Education, ability to refuse sex and employment status were found to be associated with the use of modern contraceptives. Whereas those who reside in rural settings are less likely to use modern contraceptives, those who have at least primary education are more likely to use modern contraceptives. Neither age, marital status, nor age at first sex was found to be associated with the use of modern contraceptives. Educating Chadian women of reproductive age on the importance of contraceptive use will go a long way toward fostering uptake. This is because the study has shown that when women make decisions with others, they are more likely to opt for the use of modern contraceptives, and so a well-informed society will most likely have an increased prevalence of modern contraceptive use. | Background
The proportion of women of reproductive age whose need for family planning is satisfied by a modern contraceptive method has seen a steady increase globally, from 73.6% in the year 2000 to 76.8% in the year 2020 [1,2]. Some reasons ascribed to this mild change include limited access to services, as well as cultural and religious factors [3]. However, these barriers are being addressed in some regions, and this accounts for an increase in demand for modern methods of contraception [2].
According to the World Health Organization, the proportion of women whose need for modern methods of contraception is satisfied has been stagnant at 77% from 2015 to 2020 [2]. Globally, the number of women using modern contraceptive methods has increased from 663 million in 2000 to 851 million in 2020 [2]. By 2030, it is projected that an additional 70 million women will be using a modern contraceptive method [2]. In low- and middle-income countries, some 214 million women who wanted to avoid pregnancy were not using any method of contraception as of 2020 [2]. Low levels of contraceptive use have mortality and clinical implications [3], and about 51 million women of reproductive age have an unmet need for modern contraception [1]. Maternal deaths and newborn mortality could be reduced from 308,000 to 84,000 and from 2.7 million to 538,000, respectively, if women with intentions to avoid pregnancy were provided with modern contraceptives [4].
Low prevalence of modern contraceptive use has been linked to negative events such as maternal mortality and unsafe abortion in Africa [5][6][7]. Women with low fertility intentions in sub-Saharan Africa record the lowest prevalence rate of modern contraceptive use [8].
The use of modern contraceptives remains a pragmatic and cost-effective public health intervention for reducing maternal mortality, averting unintended pregnancy and controlling rapid population growth, especially in developing countries [3,9]. Beson, Appiah and Adomah-Afari [3] highlight that knowledge and awareness per se do not result in the utilization of modern contraceptives. Cultural and religious myths and misconceptions tend to undermine the use of modern contraception [10][11][12]. Ensuring access to and utilization of contraceptives has benefits extending beyond just the health of the population [3]. These include sustainable population growth, economic development and women's empowerment [2,3].
Nonetheless, predominantly in SSA, women do not have the capacity to make decisions pertaining to their health [13], although such decision-making capacity has proven to be an efficient driver of improved reproductive health outcomes for women [14,15]. To improve contraceptive uptake in Africa, people are encouraged to make positive reproductive health decisions to prevent unintended pregnancy and sexually transmitted infections, since these steps would lead to the reduction of maternal mortality and early childbirth amongst women [14].
At an estimated growth rate of 3.5% per year, Chad's population is growing at a relatively fast pace [16]. This trend in growth may be ascribed to the country's high fertility and low use of contraceptives [16]. Chad has been found to have the lowest prevalence of modern contraceptive use in sub-Saharan Africa despite its recorded growth from 5.7% in 2015 to 7.7% in 2019 [8,17]. UN Women [18] data also showed that a lot needs to be done in Chad to achieve gender equality, with about 6 out of 10 women aged 20-24 years married before age 18 and about 16.5% of women in their reproductive age reporting being victims of physical and/or sexual violence by a current or former intimate partner in 2018. Studies indicate that social and religious norms have undermined women's rights and self-determination in Chad [19][20][21], which affects their health decision-making capacity negatively. This situation makes it challenging for women of reproductive age in Chad to make independent decisions about the use of modern contraception, even though contraceptive use in many parts of the world has been found to yield immense benefits such as lower maternal mortality and morbidity and, to a larger extent, to influence economic growth and development [2,3]. It has therefore become necessary to investigate health decision-making capacity and modern contraceptive utilisation among sexually active women in Chad. Findings from this study will provide stakeholders and decision-makers with evidence that will guide policymaking to improve access to and utilisation of modern contraceptives in Chad.
Plain language summary
The use of modern contraceptives remains a pragmatic and cost-effective public health intervention for reducing maternal mortality, averting unintended pregnancy and controlling rapid population growth, especially in developing countries. Although there has been an increase in the utilization of modern contraceptives globally, it is still low in Chad, with a prevalence rate of 7.7%. This study assessed the association between the health decision-making capacities of women in Chad and the use of modern contraceptives. We used data from the 2014-2015 Chad Demographic and Health Survey. Our study involved 4,113 women who were in sexual union and with complete data on all variables of interest. We found the prevalence of modern contraceptive utilization to be 5.7%. Women's level of education, ability to refuse sex and employment status were found to be significantly associated with the use of modern contraceptives. Whereas those who reside in rural settings are less likely to use modern contraceptives, those who have at least primary education are more likely to use modern contraceptives. Our study contributes to the efforts being made to increase the utilisation of modern contraceptives. There is a need to step up contraceptive education and improve adherence among Chadian women in their reproductive years. In the development of interventions aiming at promoting contraceptive use, significant others such as partners and persons who make health decisions with or on behalf of women must be targeted as well.
Keywords: Women, Chad, Modern contraception, Reproductive health, Demographic and Health Survey
---
Materials and methods
---
Data source
The study used data from the most recent Demographic and Health Survey (DHS) conducted in Chad in 2014-2015. The 2014-2015 Chad Demographic and Health Survey (CDHS) aimed at providing current estimates of basic demographic and health indicators. It captured information on health decision making, fertility, awareness and utilization of family planning methods, unintended pregnancy, contraceptive use, skilled birth attendance, and other essential maternal and child health indicators [22].
The survey targeted women aged 15-49 years. The study used DHS data to provide holistic and in-depth evidence of the relationship between health decision-making and the use of modern contraceptives in Chad. DHS is a nationwide survey collected every five-year period across low-and middle-income countries. A stratified dual-stage sampling approach was employed. Selection of clusters (i.e., enumeration areas [EAs]) was the first step in the sampling process, followed by systematic household sampling within the selected EAs. For the purpose of this study, only women (15-49 years) in sexual unions (marriage and cohabitation) who had complete cases on all the variables of interest were used. The total sample for the study was 4,113.
---
Study variables
---
Dependent variable
The dependent variable in this study was "contraceptive use", which was derived from the 'current contraceptive method' variable. The responses were coded 0 = "no method", 1 = "folkloric method", 2 = "traditional method", and 3 = "modern method". The existing DHS variable excluded women who were pregnant and those who had never had sex. The modern methods included female sterilization, intrauterine contraceptive device (IUD), contraceptive injection, contraceptive implants (Norplant), contraceptive pill, condoms, emergency contraception, standard days method (SDM), vaginal methods (foam, jelly, suppository), lactational amenorrhea method (LAM), country-specific modern methods, and respondent-mentioned other modern contraceptive methods (e.g., cervical cap, contraceptive sponge). Periodic abstinence (rhythm, calendar method), withdrawal (coitus interruptus), and country-specific traditional methods of proven effectiveness were considered traditional methods, while locally described methods and spiritual methods (e.g., herbs, amulets, gris-gris) of unproven effectiveness were the folkloric methods. To obtain a binary outcome, all respondents who said they used no method, a folkloric method, or a traditional method were put in one category and given the code "0 = No", whereas those who were using a modern method were put into another category and given the code "1 = Yes".
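As an illustration of this recoding step, a small pandas sketch is shown below; the string labels stand in for the actual DHS method codes and are assumptions made only for the example.

```python
import pandas as pd

# The labels below stand in for the actual DHS 'current contraceptive method'
# codes; only the grouping logic matters for this illustration.
method_type = pd.Series(["no method", "folkloric method", "traditional method",
                         "modern method", "modern method"])

modern_use = (method_type == "modern method").astype(int)  # 1 = Yes, 0 = No
print(modern_use.tolist())  # [0, 0, 0, 1, 1]
```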
---
Explanatory variables
Health decision-making capacity was the main explanatory variable. For health decision-making capacity, women were asked who usually decides on the respondent's health care. The responses were respondent alone, respondent and husband/partner, husband/partner alone, someone else, and others. This was recoded as respondent alone = 1, respondent and someone else (respondent and husband/partner, someone else, and others) = 2, and partner alone = 3. Similarly, some covariates were included based on theoretical relevance and conclusions drawn in previous studies about their association with modern contraceptive utilisation [13,14,23]. These variables are age, place of residence, wealth quintile, employment status, educational level, marital status, age at first sex, and parity.
---
Analytical technique
We analysed the data using STATA version 13. We started with descriptive computation of modern contraception utilization with respect to health decisionmaking capacity and the covariates. We presented these as frequencies and percentages (Table 1). We conducted Chi-square tests to explore the level of significance between health decision-making capacity, covariates, and modern contraceptive utilization at a 5% margin of error (Table 2). In the next step, we employed binary logistic regression analysis in determining the influence of health decision-making capacity on modern contraceptive utilization among women in their reproductive ages as shown in the first model (Model I in Table 3). We presented the results of this model as crude odds ratios (cOR) with their corresponding 95% confidence intervals. We further explored the effect of the covariates to ascertain the net effect of health decision-making capacity on modern contraceptive utilization in the second model (Model III in Table 3) where adjusted odds ratios (aOR) were reported. Normative categories were chosen as reference groups for the independent variables. Sample weight was applied whilst computing the frequencies and percentages so that we could obtain results that are representative at the national and domain levels. We used STATA's survey command (SVY) in the regression models to cater for the complex sampling procedure of the survey. We assessed multicollinearity among our co-variates with Variance Inflation Factor (VIF) and realized that no multicollinearity existed with a mean VIF = 3.7.
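For illustration only, the adjusted model could be approximated outside Stata with a weighted logistic regression in Python's statsmodels; the synthetic data, variable names, and the treatment of sampling weights as frequency weights are assumptions of this sketch, which does not reproduce the complex-survey (SVY) variance adjustment used in the actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Tiny synthetic stand-in for the analytic sample; variable names mirror the
# covariates described above, but the values are invented for illustration.
rng = np.random.default_rng(1)
n = 500
women = pd.DataFrame({
    "decides_with_someone": rng.integers(0, 2, n),
    "education": rng.integers(0, 3, n),        # 0 none, 1 primary, 2 secondary+
    "rural": rng.integers(0, 2, n),
    "employed": rng.integers(0, 2, n),
    "can_refuse_sex": rng.integers(0, 2, n),
    "weight": rng.uniform(0.5, 2.0, n),        # sampling weight
})
logit = -3 + 1.0 * women["decides_with_someone"] + 0.8 * women["education"]
women["modern_use"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(women.drop(columns=["weight", "modern_use"]))
fit = sm.GLM(women["modern_use"], X, family=sm.families.Binomial(),
             freq_weights=women["weight"]).fit()

# Adjusted odds ratios with 95% confidence intervals, as reported in Table 3.
aor = np.exp(pd.concat([fit.params.rename("aOR"), fit.conf_int()], axis=1))
aor.columns = ["aOR", "2.5%", "97.5%"]
print(aor.round(2))
```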
---
Results
---
Socio-demographics and prevalence of modern contraceptive use
Among the respondents who participated in this study, 91% of them are married and about three-quarters of them (73.3%) have partners that are sole decision-makers regarding health issues (Table 1). Most of the respondents (79.8%) reside in rural settings, with 69.5% having no education and about half (56.4%) between the ages of 20 and 34 (Table 1). A higher proportion of the respondents (56%) are not able to refuse sex when demanded.
Table 3 Multivariate logistic regression results on the predictors of modern contraception utilisation among women in Chad
The prevalence of modern contraceptive use is 5.7% [CI = 5.46-5.91] (see Fig. 1).
---
Association between use of modern contraceptive and the predictor variables
As shown in Table 3, in both the adjusted and the unadjusted models, respondents who take health decisions with someone are more than two times as likely (aOR = 2.71; 95% CI = 1.41, 5.21 and OR = 2.38; 95% CI = 1.28, 4.42, respectively) to use contraceptives as respondents who decide alone. It has also been observed that having at least a primary education positively affects the likelihood of using modern contraceptives: primary education (aOR = 2.34; 95% CI = 1.56, 3.50); secondary or higher education (aOR = 4.02; 95% CI = 2.44, 6.60) (see Table 3). Likewise, people who reside in rural areas are 53% less likely to use modern contraceptives (aOR = 0.47; 95% CI = 0.27, 0.82) than their counterparts who live in urban areas (see Table 3). Furthermore, women who are employed have higher odds of using contraceptives than those who are unemployed (aOR = 2.24; 95% CI = 1.54, 3.28). Along with that, women who have given birth at least four (4) times are 61% more likely to use modern contraceptives (aOR = 2.71; 95% CI = 1.41, 5.21) than those with no birth experience. It was observed that women who can refuse sex have higher odds of using modern contraceptives (aOR = 1.61; 95% CI = 1.14, 2.27) relative to those who are unable to refuse sex (see Table 3).
---
Discussion
This study was essential since the ability of sexually active women to make significant decisions on their health, including choices of modern contraceptive use (e.g., condom use), can lead to good reproductive health [24]. We observed that the prevalence of modern contraceptive use in Chad among women in sexual union was 5.7%. Generally, about three-quarters (73.3%) of respondents who were married (91%) had partners as the sole decision-makers regarding their health issues, a finding similar to that of a study conducted in Ghana where only a quarter of women in the study took healthcare decisions single-handedly [25]. However, in a multi-country assessment, it was revealed that about 68.66% of respondents across the 32 nations studied could make decisions on their reproductive health [14]. The discrepancy might be attributable to the diverse research populations and the number of nations investigated. Furthermore, parties to decision-making regarding the health of women play an important role in the usage of contraceptives. It was found that respondents who took health decisions with someone were more than two times as likely to use contraceptives as respondents who decided alone. Similar results were seen in Burkina Faso [26] and Mozambique [27], where joint spousal decision-making had a positive influence on the utilization of contraception. In terms of decisions taken solely by partners, women were 18% less likely to report an intention to use contraceptives [27]. Again, it was revealed in Pakistan that women whose partners were sole decision-makers were less likely to use contraception. This shows that a woman's inability to discuss and make decisions on health, especially on family planning issues, can negatively affect the use of modern contraception.
Education has been recognised as a strong determining factor of contemporary contraceptive use. It exposes women to factual information as well as convinces their partners of the need for contraception [28]. This is relevant as we also observed that having at least a primary education induces higher odds of using modern contraceptives. Although the study established education to be significantly associated with contraceptive use, a study conducted to measure the trend in the use of contraception in 27 countries in sub-Saharan Africa reported that an increase in the proportion of the study participants with secondary education did not affect the use of contraception [29]. In agreement with our finding, a high level of education has been found to increase the likelihood of using modern ways of delaying birth among women living in Uganda [30]. A plausible explanation for this is that as the level of education of women increases, women are more empowered to take charge of their health decision-making, since education empowers women to have autonomy over their reproductive rights [23].
With regard to place of residence, we found that urban women were more likely to use modern contraceptives compared with their counterparts in rural areas. A possible explanation is that women in urban areas may have better access to information and are more likely to be interested in education, hence the use of modern contraceptives to delay childbirth. Other reasons may be poor transportation access, long distances to health facilities and shortages of contraceptives in rural areas compared with urban areas [31]. This corroborates the findings of Apanga et al. [32], who ascribed this to the high prevalence of late marriage in urban areas as compared to rural areas [33]. Hence, there is a possibility that women in urban areas are likely to use modern contraceptives to avoid unwanted pregnancies.
Consistent with prior studies in Ghana [3,34], women who are working had a higher likelihood of utilizing contraceptive methods than those who are unemployed. A reason for this may be that, compared with their non-working peers, working women may be willing to do everything to maintain their employment and to devote more time to their occupations instead of having children, especially given their greater capacity to acquire contraceptives. Also, working women are expected to have the financial backing to be able to make decisions concerning their reproductive health. It is, therefore, no surprise that we found women within the richest wealth quintile to have the highest likelihood of utilizing modern contraceptives.
Finally, women who have given birth to at least four (4) children are more likely to use contraceptives than those with no birth experience. A plausible explanation is that multiparous women may not want more children hence resort to the use of modern contraceptives to either delay the next pregnancy or stop childbirth. This finding is similar to previous reports from Ethiopia and Tanzania which reported that as the number of living children increases, so does the usage of modern contraceptives [35,36].
---
Strength and limitations
We used a large dataset comprising 4,113 women aged between 15-49 which makes our results compelling. Findings from this study are also based on rigorous logistic regression. Despite these strengths, the study had some notable shortcomings. To begin with, the study's cross-sectional methodology limits causal inferences between respondents' individual factors and modern contraceptive use. Second, because most questions were answered using the self-reporting approach, there is a risk of social desirability and memory bias in the results. Furthermore, because this study only included women, the conclusions do not incorporate the perspectives of spouses. Finally, we believed that variables like cultural norms and health-care provider attitudes would be relevant to investigate in the context of this study, however, such variables were not included in the DHS dataset.
---
Conclusion
The study revealed that modern contraceptive utilization is very low among sexually active women in Chad. We conclude that health decision-making, education, employment status (working), higher parity and women's ability to refuse sex have a positive association with modern contraceptive utilization among sexually active women in Chad. There is a need to step up modern contraceptive education and improve adherence among women in their reproductive years. In the development of interventions aiming at promoting modern contraceptive use, broader contextual elements must be prioritized. For instance, significant others such as partners and persons who make health decisions with or on behalf of women need to be targeted.
---
Data availability
Data used for the study is freely available to the public via https://dhsprogram.com/data/available-datasets.cfm.
---
Abbreviations
---
Declarations Ethical approval
This study used publicly available data from DHS. Informed consent was obtained from all participants prior to the survey. The DHS Program adheres to ethical standards for protecting the privacy of respondents. The ICF International also ensures that the survey processes conform to the ethical requirements of the U.S. Department of Health and Human Services. No additional ethical approval was required, as the data is secondary and available to the general public. However, to have access and use the raw data, we sought and obtained permission from MEASURE DHS. Details of the ethical standards are available on http://goo.gl/ny8T6X.
---
Consent for publication
Not applicable.
---
Competing interests
The authors declare that they have no competing interest.
---
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 21,238 | 2,128 |
c044c6390881aef93e871395eb63861909c997da | Syndemic Factors and Resiliency Among Latina Immigrant Indirect Sex Workers in an Emergent Immigrant City | 2,018 | [
"JournalArticle"
] | Background: Female sex workers (FSW) constitute a highly vulnerable population challenged by numerous co-existing, or syndemic, risk factors. FSW also display resilience to these, and some evidence suggests that resilience may be associated with protective factors that improve health outcomes. We conducted in-depth interviews with indirect sex workers (n = 11) and their clients (n = 18). Interviews were coded utilizing an iterative, modified constant comparison method to identify emergent themes. We identified five syndemic risk factors (difficulty finding work due to undocumented status, shame and mental health hardship, lack of social support, alcohol use, and violence) and five resilient factors (rationalizing sex work, identifying as a "decent" woman, fulfilling immigrant goals, reducing alcohol consumption, and creating rules to reduce risk of violence and HIV/STIs). Discussion: Understanding the syndemic risk factors and resiliency developed by FSW is important to develop tailored, strength-based interventions for HIV/STIs and other risks. Female sex workers (FSW) are a highly vulnerable population, whose estimated risk of HIV is 13.5 times higher than among similarly aged women [1]. Additionally, FSWs are likely to experience violence [2-4], suffer from depression and other mental health issues [5-8], substance use disorder [9][10][11], and face stigma and discrimination [12,13]. Many FSW experience more than one of these factors at a time. Latino migrant women who exchange sex are particularly vulnerable to health-related issues, including HIV and other STIs, as a result of their unique political, economic, social, and |
immigrant men report paying women primarily of Latino ethnicity for sex [17][18][19][20][21][22][23][24]. Sex industry typologies have described how sex work in the Latino immigrant population operates, identifying the place and manner in which transactional sexual encounters occur [25,26]. In a study from North Carolina, for example, interviews with service providers concluded that the existing public health infrastructure is not well suited to meet the health needs of highly mobile, unauthorized immigrant Latina sex workers [25].
There are few studies, however, eliciting information about the Latina sex work industry from Latina FSWs themselves [27]. We conducted in depth interviews with Latina immigrant women living in Baltimore City who exchange sex for money and goods and their clients, and found that most of these women engaged in indirect sex work. Although these women engaged in transactional sex to earn extra money, they were not considered sex workers or prostitutes by themselves or the community [26,28]. In this study, we evaluate how syndemic risks and resiliency impact the health risk of Latina immigrant FSWs engaging in indirect sex work.
Syndemic theory postulates that assessing the overall impact of co-occurring factors, such as substance use and mental health issues, provides a better assessment of HIV risk than considering the additive effects of separate factors [29]. This theory provides a useful framework to explore the complex and multiple challenges faced by female sex workers [30]. However, focusing only on deficits and challenges faced by vulnerable populations minimizes the strength and potential of individuals and communities living in difficult situations. There is, therefore, a compelling argument to focus on resiliency, or "the process of overcoming the negative effects of risk exposure, coping successfully with traumatic experiences, and avoiding the negative trajectories associated with risk" [30][31][32]. Evidence suggests that increased resilience may be associated with protective factors that improve health outcomes [33]. In this study, we evaluate how syndemic factors and resilience influence the behavior of immigrant Latina FSW in an effort to identify strategies that may mitigate HIV risk in this population.
---
Methods
---
Recruitment and Data Collection
We conducted 32 in-depth interviews with Latina sex workers and their Latino immigrant clients. All interviews were conducted between July 2014 and April 2015. Eligibility included being: 1) ≥ 21 years old; 2) being born in a Spanish-speaking Latin American country, and 3) having engaged in transactional sex with a Latino immigrant man (if sex worker) or Latina woman (if client) within the past year in Baltimore, Maryland. Transactional sex was defined as exchanging vaginal and/or anal sex for money (e.g., cash, rent, or payment of bills), material goods (e.g., presents, drugs), and/or housing. Participants could therefore be engaged in "direct" sex work, in which the primary purpose of the interaction is to exchange sex for a fee, or "indirect" sex work in which sex is exchanged for a fee but not recognized as sex work. After learning from clients that street-based FSW are most likely to be U.S.-born Latinas, we expanded recruitment to include two U.S.-born Latina FSW. The analysis for this manuscript focuses on indirect sex work; 11 of the 14 FSW interviewed had experience with indirect sex work in the past year, and all of the male clients had engaged in transactional sex with a Latina indirect sex worker.
The participants were recruited through snowball sampling with coupons for referrals. Initial participants were identified through our community network. At the end of each interview, the participants were asked if they knew of another person who may be eligible and interested in completing an interview. If the interviewee referred an eligible person who completed an interview, they were provided $50.
Two trained Latina immigrants with extensive experience conducted the interviews. Interview questions included migration history, local social support, perceptions of sex work in the local Latino community, sex work history, current sex work practices, experiences with violence, perceptions of HIV/STI risk, and access to health care. Interviews took place in a private location convenient to and trusted by the participant (i.e., local restaurants, public parks). Interviews were audio recorded with participant consent, and lasted 45-90 minutes each. Sex workers were compensated $100 USD for their time, and clients were compensated $50 USD for their time.
---
Data Analysis
The audio recording of each interview was transcribed verbatim. Transcripts were then cleaned of any possible identifiers, translated into English, and reviewed for accuracy. Spanish and English transcripts were then imported into Atlas.ti qualitative software. Transcripts were reviewed as the research was conducted so that the analysis of the early interviews could inform those that occurred later. Data analysis of the text was conducted using an iterative, constant comparison coding process. A team of two coders independently coded the cleaned transcripts (one in Spanish, one in English), generating as many concepts as possible before moving on to selective coding. These concepts were then consolidated into themes and subthemes [34,35]. Thematic codes were compared within a single interview and between interviews [34].
The Johns Hopkins University School of Medicine Institutional Review Board (IRB) and the Baltimore City Health Department approved all protocols.
---
Results
Ten themes emerged, reflecting either syndemic risk factors or sources of resilience. The themes address the lived experience and impact of indirect sex work on the Latina immigrant sex workers. Participant demographics are presented in Table 1.
---
Syndemic risk factors
Difficulty finding work due to undocumented status.-The women were overwhelmingly living in the U.S. without documentation. As a result, the women expressed great difficulty finding employment that paid adequately. Described one Honduran woman who worked through an agency that would place undocumented workers in jobs: "Latina women are heavily exploited here. Heavily…The problem is that you work there through an agency and the agency keeps a percentage of each person. You are paid a miserable pittance…They keep the rest of the money." Other women found work through family members or acquaintances, but these jobs were low paying and often offered few hours a week. Said another Honduran woman: "Why do we do this [sex work]? [Because] it's difficult to find a job."

Shame and mental health hardship.-Many of the women interviewed wanted to find an alternative way to make money and expressed shame in needing to sell sex. One woman from Costa Rica who had lived in the U.S. for 8 years said, for example: "I know that even though I am paid for my body, it will never be enough of a price because you must have values, must have dignity…I say well he can give me this much. I know it isn't right." Almost all of the women interviewed commented that they were very discreet in these interactions: "No, no one [knows]. No one… we all play it like we are proper and decent ladies."
For some women, this shame influenced their mental health and wellbeing. Said one Honduran who worked in a bar: "I get depressed. I cry a lot. Sometimes I get drunk every day because I don't want to know anything. A lot of depression. Once I tried to commit suicide. I don't want to get drunk. I don't want this life. I want to be someone."
Lack of social support.-Although the women initially came to Baltimore because they knew someone there, the women reported minimal to no social support. Described one woman from Costa Rica: "Support here? No. Trust? Just me. But I consider [a former roommate] a friendship….[I can get help from her if needed] depending on the help." One woman from El Salvador described being able to get help if needed from two men who were her sex work clients: "When I want someone to talk to, I have friends. I call a guy, a man named [removed], another one called [name removed]. They help me." Many of the women, however, were unable to identify someone who could help them if they needed support of any kind, and the women who could identify support recognized this support as limited and/or with conditions.

Alcohol use.-Alcohol use was most prominent among bar workers. In the bars, women are hired by Latino immigrant men frequenting the establishment to serve them drinks and provide company. The beer, typically around $3 USD, may cost up to $20 with the woman's company. A Salvadorian bartender explained this: "There is an obligation to drink because otherwise the tips are little... [So you have] to be invited for a beer, because a beer costs $20. Half is for me and half is for the owner." The range of alcoholic drinks consumed in a night varied among participants from 5 to 20. Said one Honduran worker: "You have to drink a lot. If you don't drink, you don't make much…I think I drink too much in this country, from working in here. [I drink] beer. Sometimes 15, 12 [a night]." Alcohol use was also mentioned in connection with other types of indirect sex work, but this was not common.
Violence.-Women in all types or venues of sex work experienced violence or threats of violence from their clients. One Honduran sex worker who met clients through the bar described how frequent this is: "Every women, as I told you, we are mistreated but we don't say anything because we are ashamed…I was badly hit [by a client] and said it was an accident. I still have marks on my body and I said I had fallen down. But no, a guy hit me."
The violence or threats of violence the women faced largely resulted from disputes over the woman's willingness to engage in the transactional sexual activities the man wanted, or over the amount of money to be paid. Described one client from Honduras: "Imagine, I've invited a girl, she is sitting here on my lap or next to me and I am spending money on her… Then another guy comes and just because she wanted, she leaves [me for] him. Of course I won't like that situation. I am spending on her…so a violent person gets angry and the quarrel begins."
---
Resiliency: Protective Factors to Mitigate Risks
Rationalizing sex work.-The women interviewed expressed that many Latina immigrants in Baltimore occasionally engaged in transactional sex. One Honduran woman who worked in a bar and sold sex in addition to working in a factory and providing cleaning services stated:
Almost 5 [out of 10], something like that. It's quite common, quite common. If women's wages were different, we wouldn't need to do this…The thing is that Latin women do it [sell sex] here. Yes. They do it because they need to. They don't have a husband who pays for all they need.
Identifying as a "decent woman."-Despite feeling shame, the indirect sex workers maintained a standing of "decent" or "respectable" women through their typically slow approach to gaining male clients. As defined by one client, when discussing the indirect sex workers he meets at the bars, "A respectable woman is one with whom it's not so easy to have sex. You need… 'who's that person? What's their name?' and all that." Described another man from Honduras: "I [am there] today, I invite the girl. Tomorrow, I invite her again. And that's it, talking. Where are you from, do you have children… 'not now, wait' [they say] but they usually will [eventually] say yes." The indirect sex workers take pride in being "decent" women, and include in this definition only engaging in vaginal sex.
Selling sex as needed to fulfill immigration goals.-As indirect sex workers working independently, the women all decided when to sell sex and to whom. For many of the women, selling sex only when they independently chose to do so provided an opportunity to gain a sense of empowerment and success while attempting to survive as an immigrant in a new environment that is overwhelmingly difficult and without a supportive network. Specifically, this was tied to their ability to do what they sought to do by coming to the U.S.: provide for their family. Said one woman from El Salvador:

Reducing alcohol consumption.-Many of the bar workers recognized that their level of alcohol consumption was neither healthy nor conducive to their safety, whether in terms of physical violence from a man or of control over condom use during sex. As a result, many of the women shared strategies to reduce alcohol consumption while still earning money through selling drinks and company with men at the bars. For example, one woman described fooling a man: "One of my co-workers taught me that I could discreetly throw it away [by pouring some of the beer on the floor or in the trash]."
---
Creating rules to maintain control and reduce risk of violence and HIV/STIs.-
In an effort to reduce their risk of violence and HIV/STIs, many indirect sex workers have rules they follow to "vet" a potential client. These include getting to know the potential client first, or having other people they know and trust vouch for the potential client. Women who sell sex independently when needed, for example, rely on gaining clients from men they already know and trust -often previous or current clients they consider friends. One client from Honduras described how women in the bars do this:
In the bars, you meet the women, you talk with them, they ask you to invite them some drinks, beers…Once you know them and began talking with them, you have a kind of friendship, you tell them, "You are beautiful." And that you like her…So you invite her to go out to eat one day, you can go for a ride with her, and after that… you do it. If not, you have to reactivate the relationship until she accepts. Not every woman accepts it; and those who do accept want to be motivated and you have to gain their trust."
---
Discussion
In this study, we used syndemic theory and a resiliency framework to document the experiences of Latina immigrant FSW who participate in indirect sex work. We demonstrate high levels of resilience among these women, even as they faced multiple co-existing syndemic risk factors, many of them at the structural or community level and beyond their control. Specifically, the women understood their sex work as a means to economic independence and altered the behaviors under their control to reduce their risk of HIV/STIs and violence.
The FSW in this study experienced multiple and severe syndemic risk factors. For most, limited work opportunities because of their documentation status led to sex work, but at a high price. The women expressed high levels of guilt and shame leading to depression, increased alcohol use, and even a loss of agency to confront partner violence. In addition, the women had very limited social support. Smaller social networks and isolation among Latino immigrants are associated with depressive symptoms and poor physical and mental health [36]. Despite these challenges, the women exhibited important elements of resilience that helped them cope with their situation and gave them a sense of control and self-efficacy. Resilience operates on three levels: 1) the social environment (e.g., neighborhoods and social supports), 2) the family (e.g., attachment and parental care), and 3) the individual level (e.g., attitudes, social skills, and intelligence) [37]. Among the FSW in this study, social and family support were limited or non-existent, and they therefore demonstrated resilience primarily at the individual level. For example, despite pressures to drink heavily as part of the job, the women found ways to mask or fake consumption of alcohol as a means of retaining control of the situation and protecting their health. They also established rules of engagement with their clients in order to reduce their risk of violence and HIV/STIs. In addition, the women justified their work as necessary to meet their immigration goals, with emphasis on the sacrifice made for the wellbeing of their families. These types of adaptive coping strategies have been shown to reduce emotional distress [38]. Self-efficacy, or a sense of control over thoughts, feelings, and environment through action, as demonstrated by these women, can protect against stress and promote physical and mental well-being [39][40][41][42].
Understanding the resilience of the FSW as described through their narratives can help develop strength-based interventions to reduce co-existing risk factors, including HIV/STIs. For example, social isolation and shame were prominent syndemic factors that the women discussed individually without recognizing that these feelings were a common thread throughout the interviews. This suggests that there may be an opportunity for women who do this work to share experiences and learn from each other. In Baltimore, for example, group therapy sessions for undocumented immigrants reduce social isolation by providing an opportunity for people with shared background, experience, and beliefs to discuss issues and coping strategies related to the migration experience [43]. A similar approach, adapted for women who engage in sex work, could reduce social isolation and help FSW recognize their strength and resilience. Other approaches could include interventions that address the women's priorities and health concerns, and training that builds on the skills they currently utilize to reduce risk. Strategies to reduce alcohol consumption can be adapted for the context of bar work, and partnerships with police (in cities where police do not cooperate with immigration authorities) may encourage women to report and seek help for gender-based violence. Interventions to improve job opportunities, such as English language classes, and partnerships with legal services to gain documentation, if eligible, would address a top priority for these women by providing them with more options for economic independence.
This study has several limitations that are important to recognize. This study utilized a relatively small sample of indirect FSW who engaged in sex with Latino men and the findings cannot be generalized to Latina sex workers who engage in traditional direct sex work. The findings are specific to women and not generalizable across genders, sexual identities or orientations, or other ethnic/racial groups.
---
| 18,449 | 1,646
b76e0d5c53d627183a98ba599f9f6f727254950e | Why are Chinese workers so unhappy? A comparative cross-national analysis of job satisfaction, job expectations, and job attributes | 2,019 | [
"JournalArticle",
"Review"
] | Using data from the 2015 International Social Survey Program (ISSP), this study conducts a multinational comparison of job satisfaction determinants and their drivers in 36 countries and regions, with particular attention to the reasons for relatively low job satisfaction among Chinese workers. Based on our results from a Blinder-Oaxaca decomposition analysis, we attribute a substantial portion of the job satisfaction differences between China and the other countries to different job attributes and expectations; in particular, to unmet job expectations for interesting work, high pay, and opportunities for advancement. We also note that, contrary to common belief, Chinese workers value similar attributes as Western workers but perceive their work conditions as very different from those in the West. | Introduction
Both academics and HR specialists recognize that keeping workers happy is important for the organization because satisfied workers-being more productive [1][2][3][4][5], more loyal, and less likely to leave their jobs [6][7][8][9][10][11]-can positively impact company performance [12][13][14][15]. Not only does a comprehensive review study find a significant correlation between job satisfaction and job performance, especially in complex jobs [16], but other research associates low levels of job satisfaction with higher levels of absenteeism and counterproductive behavior [17,18]. The extent to which workers consider their jobs satisfying is thus now a major focus in many disciplines, including psychology, economics, and management [19][20][21][22][23][24].
China offers a particularly interesting case study for job satisfaction because its Confucian-based work ethic of hard work, endurance, collectivism, and personal networks (guanxi) expects Chinese employees to devote themselves to and take full responsibility for the job, work diligently, and generally align their values and goals with those of the organization [25]. Deeply rooted in this Confucianism is the construct of Chinese individual traditionality reflecting "a moral obligation to fulfill the normative expectations of a prescribed role to preserve social harmony and advance collective interests" [26]. Hence, for the traditionalist Chinese, self-identity is defined by role obligations within networks of dyadic social relationships, which may imply less relevance for the job satisfaction determinants that matter in Western countries. Yet one of the rare nationwide studies that examined job satisfaction in China [27] found not only that job satisfaction among employees aged 16-65 is relatively low-with only 46% explicitly satisfied-but also that worker expectations differ significantly from what their jobs actually provide. In particular, many jobs are less interesting than expected, which prevents workers from realizing their perceived potential, creating an expectations gap that is a strong determinant of job satisfaction. Unlike research for Western countries, however, their study finds no link between job satisfaction and turnover, an outcome they attribute to China's unique Confucian-based work ethic.
Despite this clear documentation of relatively low job satisfaction in China, however, few extant studies systematically and comprehensively compare such satisfaction with that in other countries. To begin filling this void, this present analysis draws on data for 36 countries, including China, from one of the most comprehensive cross-national surveys on job satisfaction ever conducted. One unique aspect of this survey is that it collects information not only on actual job characteristics but also on worker perceptions of what an ideal job should entail. As pointed out by Locke, "Job satisfaction is the pleasurable emotional state resulting from the appraisal of one's job as achieving or facilitating the achievement of one's job values. Job dissatisfaction is the unpleasurable emotional state resulting from the appraisal of one's job as frustrating or blocking the attainment of one's job values or as entailing disvalues. Job satisfaction and dissatisfaction are a function of the perceived relationship between what one wants from one's job and what one perceives it as offering or entailing" [23]. It is thus this expectations gap which is fundamentally driving job satisfaction. Unfortunately, much of the job satisfaction literature focuses solely on job attributes, and not on how these are evaluated. Hence, in addition to decomposing job satisfaction differences between China and other country clusters (using the Blinder-Oaxaca method), we are also able to determine the extent to which work-related expectations are being met and how they relate to low job satisfaction, thereby helping to explain its drivers. In doing so, we also provide additional evidence to a previous study [27] that found lower job satisfaction in China, particularly in relation to Western countries.
Identifying the determinants of job satisfaction in China and understanding how these determinants differ from those in other countries is important from a management perspective. Western countries are investing billions in China, and many multinational companies have set up major manufacturing and distribution facilities there. These companies not only employ large numbers of Chinese workers but are also frequently managed by international teams that often apply Western HR concepts. Yet considering China's very different social and cultural background, it is important to assess Chinese employees' responses to such Western HR concepts. In this paper we provide evidence on what Chinese workers value in a job and how these values differ from those of workers in other countries. This is an important precondition for a deeper understanding of the effectiveness of HR policies in China.
---
Previous research
Despite a large body of literature on the determinants of job satisfaction [6, 19-22, 24, 28-35], the research for China is restricted mostly to particular geographic areas [36][37][38][39][40][41][42][43] or specific occupations, including teachers [44][45][46], physicians [47,48], nurses [49][50][51][52][53], civil servants [54], and migrant workers [55,56]. To our knowledge, only four studies focus broadly on all employees across the nation. The first, based on 2002 China Mainland Marketing Research Company data for 8,200 employees in 32 cities, identifies age, education, occupation, and personal income as the main determinants of job satisfaction [57], while the second [58], drawing on 2008 Chinese General Social Survey (CGSS) data for urban locals, first-generation migrants (born before 1980), and new-generation migrants (born 1980 or thereafter), pinpoints income and education. The third study, based on 2006 CGSS data, not only identifies lower job satisfaction among female employees than among male employees, but positively associates job satisfaction with higher levels of education and Communist Party membership [59]. It also demonstrates, however, that job tenure, job security, earnings, promotion, and having a physically demanding job are significantly and positively correlated with job satisfaction for both sexes [59]. The final study [27], already referenced above, uses a combination of 2012 China Labor-Force Dynamic Survey (CLDS) data and 2012-2014 China Family Panel Studies (CFPS) data to document the relatively low job satisfaction of Chinese workers and a significant job expectation gap, which reduces workers' ability to reach their perceived potential and greatly determines (low) job satisfaction.
Although the number of cross-national analyses in this area is limited, one study [24], using data from the 1997 International Social Survey Program (ISSP), documents that 79.7% of employees in 21 countries report being fairly satisfied or satisfied with their job. Such satisfaction is significantly impacted by work-role inputs and outputs, with having an interesting job and good relations with management being the major determinants. Subsequent work [35], based on data from phase two of the Collaborative International Study of Managerial Stress (CISMS 2), reports a significantly lower average job satisfaction for its Asian country cluster (7.9) than for its Anglo-Saxon (9.6), Eastern European (9.2), and Latin American (9.6) country clusters, on a 2-item job satisfaction measure ranging from 2 to 12. These results support the assumption that the linkages between work demands and work interference with family (WIF) and between WIF and both job satisfaction and turnover intentions are stronger in individualistic Anglo-Saxon countries than in more collectivistic world regions, including Asia, Eastern Europe, and Latin America.
Other research focuses either on specific subpopulations of the workforce or on particular aspects, such as skills and benefits. For instance, one of these previous studies [31], using 1994-2001 European Community Household Panel (ECHP) data, demonstrates that self-employed workers are more likely than paid employees to be satisfied with their present job type but less likely to be satisfied with the corresponding job security. More recent work [60], using 2005 ISSP data for 32 countries, shows that women and mothers occupy more satisfying jobs in countries with more extensive workplace flexibility. As regards job skills, another more recent study [61], using Programme for the International Assessment of Adult Competencies (PIAAC) data for 17 OECD countries, reports that the impact of labor mismatches on job satisfaction is generally better explained by skills mismatch, although educational mismatches have a greater effect on wages. Lastly, drawing on Global Entrepreneurship Monitor (GEM) data, some literature reveals that although entrepreneurial innovation benefits the job satisfaction, work-family balance, and life satisfaction of entrepreneurs globally, in China it benefits only satisfaction with work-family balance and life, not job satisfaction [62].
As this brief review underscores, with the notable exception of the recent study [27] mentioned above, not only are representative investigations into job satisfaction determinants in China rare, but, more importantly for our study, so are cross-national studies, especially ones addressing China's relatively low level of employee job satisfaction. We are also unaware of studies that explicitly assess job attributes, that is, the extent to which certain attributes are both present in a job and valued by the worker. Hence, to expand understanding of this issue, we decompose the job satisfaction differences between China and several other country clusters to assess the universality and generalizability of particular determinants of job satisfaction and, importantly, the extent to which differing expectations about a job explain China's job satisfaction level.
---
Data and methods
Data: Our analysis is based on data from the 2015 ISSP, an ongoing collaborative administration of annual cross-national surveys on topics important for the social sciences. Begun in 1984 with four founding members, the program now includes about 50 member countries from all over the world. Whereas three previous surveys (1989, 1997, and 2005) included a section on work orientation and collected data on job attitudes and job characteristics, China did not participate in this module until 2015. Drawing on this 2015 data set, we analyze a sample of 17,938 individuals in 36 countries and regions: Australia, Austria, Belgium, Chile, China, Croatia, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Great Britain, Hungary, Iceland, India, Israel, Japan, Latvia, Lithuania, Mexico, New Zealand, Norway, Philippines, Poland, Russia, Slovakia, Slovenia, South Africa, Spain, Suriname, Sweden, Switzerland, Taiwan (province of China), and the United States. The ISSP survey is usually included in other large surveys (with only a handful of countries conducting single surveys). In most countries, face-to-face interviews with multi-stage sampling were conducted (in some countries, such as Poland, questionnaires were self-completed with interviewer involvement). All surveys were conducted in the national language(s). Translations were evaluated by experts and, in some countries, by back-translation. Each country used a specific stratification strategy, with China, for example, using education, GDP per capita, and urbanization [63]. Our final sample excludes all self-employed workers to cover only those currently in paid employment (see S1 Table for summary statistics for the entire sample and for China only). Note that the sampling procedure differs somewhat in each country, and the main sampling quotas (i.e., age, gender and education) are based on the composition of the whole population of a respective country, not just on the labor force.
Defining clusters of countries: When comparing China's job satisfaction with the job satisfaction in other nations, one must decide how to construct a comparison group. Several options are possible, including country-by-country comparisons, comparing China with "the rest of the world", or grouping countries according to some characteristics. In order to take account of the heterogeneity in job characteristics and job expectations among countries, and yet to provide insights in a summarized and tractable way, we have opted for clustering countries according to a few economic and sociodemographic characteristics. We partition the remaining 35 countries and regions into 3 clusters by using the k-means clustering algorithm [64]. The algorithm begins by assigning a random cluster number to each observation; these serve as the initial cluster assignments. For each cluster, the algorithm then computes the cluster centroid (the vector of the cluster's variable means) and assigns each observation to the cluster whose centroid is closest (where closest is defined using Euclidean distance). This process continues until assignments no longer change [65]. To obtain a valid assignment of each country to a specific cluster, we run the algorithm 400 times, with each run using different initial cluster assignments. We base our cluster analysis on the country-specific mean values of certain variables within the data set, namely working hours, income in US dollars, age, years of education, family size and marital status. The obtained clusters are the following:
• Cluster 1: Chile, Croatia, Czech Republic, Estonia, Georgia, Hungary, India, Latvia, Lithuania, Mexico, Philippines, Poland, Russia, Slovak Republic, Slovenia, South Africa, Spain, Suriname, Taiwan (province of China).
• Cluster 2: Australia, Denmark, Iceland, Norway, Switzerland.
• Cluster 3: Austria, Belgium, Finland, France, Germany, Israel, Japan, New Zealand, Sweden, Great Britain, United States.
The largest cluster, Cluster 1, includes all Eastern European countries, Russia, and the Baltic states, as well as a few other countries from Asia, Western Europe and South America. Cluster 2 primarily captures the Nordic countries, as well as Switzerland and Australia. Cluster 3 is made up primarily of Western European countries and the United States. Summary statistics for each cluster are presented in S2 Table. As can be seen in the summary statistics, Cluster 1 is characterized by a higher average number of working hours, as well as a larger average family size, compared to Clusters 2 and 3. The average monthly income and the educational level are substantially lower in Cluster 1 than in the other two clusters. The mean age in Cluster 2 is the highest among all three clusters. Cluster 2 also exhibits the most educated and (in terms of income) wealthiest population, yet has the lowest number of weekly working hours. Only minor differences between Clusters 2 and 3 exist with regard to the average marital status and family size of the population.
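To make the clustering step concrete, a minimal sketch is given below. It is an illustration only: the toy country means, the placeholder country codes, the standardization step, and the use of scikit-learn are assumptions for demonstration and do not reproduce the ISSP data or the authors' exact implementation.

```python
# Illustrative sketch of the k-means clustering of countries on their mean
# characteristics; the data frame below is invented toy data, not the ISSP means.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

country_means = pd.DataFrame(
    {
        "hours": [45.1, 33.2, 38.4, 41.0, 34.5, 39.8],
        "income_usd": [620.0, 4850.0, 3310.0, 900.0, 5100.0, 2950.0],
        "age": [40.2, 46.0, 43.1, 39.5, 45.2, 42.7],
        "educ_years": [11.8, 14.9, 14.1, 12.2, 15.1, 13.8],
        "family_size": [3.6, 2.4, 2.6, 3.4, 2.3, 2.7],
        "married_share": [0.61, 0.55, 0.56, 0.63, 0.54, 0.57],
    },
    index=["A", "B", "C", "D", "E", "F"],  # placeholder country codes
)

# Standardizing is an added assumption (the text does not say whether variables
# were rescaled); it keeps income from dominating the Euclidean distances.
X = StandardScaler().fit_transform(country_means)

# n_init corresponds to the 400 runs with different random starting assignments.
km = KMeans(n_clusters=3, n_init=400, random_state=0).fit(X)
country_means["cluster"] = km.labels_
print(country_means["cluster"])
```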
Although our objective is to construct homogeneous clusters based on economic and sociodemographic variables, we also conducted an analysis using the GLOBE country classification, which groups nations by cultural characteristics; the main conclusions remain unchanged. In a further sensitivity analysis, we also clustered countries according to their Human Development Index, a composite index of life expectancy, education, and per capita income at the country level. Again, the main conclusions of this paper remain unchanged.
Country differences: Although the 7-point scaling of our job satisfaction measure might suggest a latent variable estimation approach as the most appropriate, because the bias introduced by an OLS analysis is relatively small [66], we employ the standard OLS regression method applied in the majority of subjective well-being (SWB) studies [67]. Hence, to pinpoint the differences among countries, we estimate a series of linear regressions (OLS) of the following form:
JS_i = β_0 + β_1 C_i + β_2 S_i + β_3 A_i + β_4 E_i + ε_i    (1)
where JS_i denotes the job satisfaction of individual i, C_i is the set of country dummy variables (with Germany as the reference group because its job satisfaction mean falls roughly mid-sample), S_i, A_i and E_i represent socioeconomic and demographic characteristics, work attributes, and work expectations, respectively, and ε_i is the error term. Job satisfaction is measured by the question, "How satisfied are you in your (main) job?" with responses measured on a 7-point scale from "1 = completely satisfied" to "7 = completely dissatisfied." For convenience of interpretation, we recode the values so that 7 reflects the highest job satisfaction and 1 the lowest. This job satisfaction measure, although based on only a single item, is empirically documented to be acceptable [68]. Work attributes. Based on prior literature and data availability, we use seven variables to capture work attributes:
• Hours worked per week (including overtime).
• Work time conditions: Based on the response to "Which statement best describes how your work hours are decided? 1 = fixed time, 2 = decide with limits, and 3 = free to decide," we create two dummy variables for 1 and 3, with 2 as the reference.
• Daily work organization: Using responses to "How is your daily work organized? 1 = not free to decide, 2 = with certain limits, 3 = free to decide," we again formulate two dummy variables for 1 and 3 with 2 as the reference.
• Work schedules: Based on responses to "Which statement best describes your usual working schedule in your main job? 1 = decided by the employer, 2 = scheduled with changes, 3 = regular schedule," we generate dummies for 1 and 3, with 2 as the reference.
• Employer-employee relations: From responses to the question, "In general, how would you describe relations at your workplace between management and employees? 1 = very bad, 2 = quite bad, 3 = neither good nor bad, 4 = quite good, 5 = very good," we derive a 3-category coding of 1 = bad, 2 = neither good nor bad, 3 = good, from which we create dummies for 1 and 3, with 2 as the reference.
• Relations between colleagues: We similarly recode the responses to "In general, how would you describe relations at your workplace between workmates/colleagues? 1 = very bad, 2 = quite bad, 3 = neither good nor bad, 4 = quite good, 5 = very good" as 1 = bad, 2 = neither good nor bad, 3 = good, and generate the two dummies for 1 and 3, with 2 as a reference.
• Work pressure: From responses to "How often do you find your work stressful? 1 = never, 2 = hardly ever, 3 = sometimes, 4 = often, 5 = always," we derive a 4-category recoding of 1 = never, 2 = sometimes, 3 = often, 4 = always, and define three dummy variables for 1, 3, and 4, with 2 as the reference.
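The recoding of the satisfaction item and of the attribute variables, together with the Eq (1) regression, could look roughly as follows. The column names, the toy observations, and the statsmodels formula are hypothetical stand-ins rather than the actual ISSP variable codes or the authors' scripts.

```python
# Minimal sketch of the reverse-coding, attribute recoding, and Eq (1) OLS.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame(
    {
        # 1 = completely satisfied ... 7 = completely dissatisfied (raw coding)
        "job_sat_raw": [1, 3, 5, 2, 7, 4, 2, 6],
        "country":     ["DE", "CN", "CN", "DE", "CN", "DE", "CN", "DE"],
        # 1 = never ... 5 = always (work stress item)
        "stress":      [2, 4, 3, 5, 1, 3, 4, 2],
        "hours":       [38, 47, 44, 40, 50, 36, 46, 39],
    }
)

# Reverse-code so that 7 reflects the highest job satisfaction, as in the text.
df["job_sat"] = 8 - df["job_sat_raw"]

# Collapse the 5-point stress item into the 4 categories described above
# (never / sometimes / often / always).
df["stress4"] = df["stress"].map({1: 1, 2: 2, 3: 2, 4: 3, 5: 4})

# OLS with country dummies (Germany as reference) and attribute controls.
model = smf.ols(
    "job_sat ~ C(country, Treatment(reference='DE'))"
    " + C(stress4, Treatment(reference=2)) + hours",
    data=df,
).fit()
print(model.params)
```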
Work expectations. We assess work expectations based on the discrepancy between personal importance (what is wanted) and perceived outcome (what is obtained) of a given work facet; namely, job security, income, job interest, promotion opportunities, work independence, usefulness to society, helping others, and contact with other people. We derive our variables from responses to related survey questions, all measured on a 5-point scale. Specifically, individuals are first asked to assess the importance of these job attributes on a scale ranging from 1 to 5 (from 1 = not important at all to 5 = very important). Thus, for example, the question related to job security is formulated as follows: "How important is job security?" Individuals are then asked to assess their current job on a 5-point agreement scale (from 1 = strongly disagree to 5 = strongly agree). In the case of job security, the question is as follows: "How much do you agree or disagree that it applies to your job: my job is secure". We then calculate work expectations by subtracting the value depicting a characteristic's actual presence in the job from the value assigned to its importance, thereby capturing unmet expectations with variables valued from -4 to 4. Clearly, a negative value has a conceptually different meaning than a positive value. More specifically, a negative value indicates that a characteristic of the current job is more pronounced than the importance given to it, whereas a positive value is more akin to unmet expectations. In order to take these different concepts into account, our regressions include dummy variables for each characteristic that are equal to one if the difference is negative or zero, and zero otherwise. It should be noted that, for most job characteristics, values are seldom negative (less than 10% of observations). Only with regard to "contact with other people" do we have 36% negative values, indicating that about a third of workers have contact with other people but do not value this characteristic highly.
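For a single facet, the expectation-gap variable and the accompanying dummy described above could be constructed as in the following sketch; the column names and values are illustrative only.

```python
# Sketch of the expectation-gap construction for one facet (job security).
import pandas as pd

df = pd.DataFrame(
    {
        "importance_security": [5, 4, 3, 5],  # 1 = not important at all ... 5 = very important
        "current_security":    [2, 4, 5, 3],  # 1 = strongly disagree ... 5 = strongly agree
    }
)

# Gap ranges from -4 to 4; positive values indicate unmet expectations,
# negative values indicate the job offers more of the facet than is valued.
df["gap_security"] = df["importance_security"] - df["current_security"]

# Dummy equal to one when the gap is negative or zero, entered alongside the gap.
df["security_met_or_exceeded"] = (df["gap_security"] <= 0).astype(int)
print(df)
```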
---
Socioeconomic and demographic variables
Our socioeconomic and demographic controls are those usually included in job satisfaction regressions [6,24]; namely, age, gender (a dummy equal to 1 for males, and 0 for females), education (measured by years of schooling), and family size. Marital status is recoded into three dummies for married, divorced, and widowed (with single as the reference). To capture personal income, we convert income data into a categorical variable based on a 3-point scale from 1 = low to 3 = high, with the top and bottom 25% of personal income defining a country's high and low levels, respectively, and the middle 50% designating the mid-level (with low as the reference category).
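The country-specific income recoding could be implemented roughly as sketched below; the toy data and the treatment of observations lying exactly on a quartile boundary are assumptions for illustration.

```python
# Sketch of the country-specific income recoding into low / mid / high categories;
# 'country' and 'income' are placeholder column names with toy values.
import pandas as pd

df = pd.DataFrame(
    {
        "country": ["CN"] * 8 + ["DE"] * 8,
        "income": [500, 800, 900, 1200, 1500, 2000, 2500, 4000,
                   1800, 2200, 2600, 3000, 3400, 3900, 4500, 6000],
    }
)

# Country-specific 25th and 75th percentiles.
q25 = df.groupby("country")["income"].transform(lambda s: s.quantile(0.25))
q75 = df.groupby("country")["income"].transform(lambda s: s.quantile(0.75))

# 1 = bottom 25% (low, the reference category), 2 = middle 50%, 3 = top 25% (high).
# Observations exactly on a boundary are assigned to the lower category here.
df["income_cat"] = 2
df.loc[df["income"] <= q25, "income_cat"] = 1
df.loc[df["income"] > q75, "income_cat"] = 3
print(df)
```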
Decomposing job satisfaction differences: To identify which specific determinants account for the job satisfaction gap between China and other countries, we employ a mean-based Blinder-Oaxaca (BO) decomposition [69,70] that assumes a linear and additive nexus between job satisfaction and a given set of characteristics. One advantage of BO decomposition over regression analysis is that it quantifies the contribution of specific factors that account for job satisfaction differences between China and a specific cluster. In our case, the total difference in mean job satisfaction can be decomposed as follows:
Ȳ_C − Ȳ_Cl = (X̄_C − X̄_Cl) β̂_C + X̄_Cl (β̂_C − β̂_Cl)    (2)
where X̄ denotes a vector of the average values of the independent variables and β̂ a vector of the coefficient estimates, for China (denoted by C) and a specific cluster (denoted by Cl). In Eq (2), the first (explained) term on the right indicates the contribution of a difference in the distribution of the determinants X, while the second (unexplained) term refers to the part attributable to a difference in the determinants' effects [71]. The second term thus captures all the potential effects of differences in unobservables. In keeping with the majority of previous research using decomposition [72], we focus on the explained terms and their disaggregated contribution for individual covariates, with a variable's contribution given by the average change in the function if that variable changes while all other variables remain the same. It is important to note that this decomposition does not reveal causal relations but rather decomposes the difference in job satisfaction between China and a given cluster by assessing the differences in the observables associated with job satisfaction. These are merely associations and do not establish the direction of a relationship. Thus, it is conceivable that certain expectations not only affect job satisfaction, but that job satisfaction may in turn affect expectations and the general assessment of a job. Hence, although we follow common practice in speaking of the "explained" part of the decomposition, we do so in full awareness that the analysis is not causal.
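A bare-bones numerical illustration of Eq (2) on simulated data is given below; the simulated coefficients and sample sizes are arbitrary and do not correspond to the paper's estimates.

```python
# Two-group Blinder-Oaxaca decomposition following Eq (2), on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def simulate_and_fit(n, slope):
    X = sm.add_constant(rng.normal(size=(n, 2)))  # constant plus two covariates
    y = X @ np.array([4.5, slope, 0.3]) + rng.normal(scale=0.5, size=n)
    return X, y, sm.OLS(y, X).fit()

X_c, y_c, fit_c = simulate_and_fit(300, 0.2)     # stands in for China
X_cl, y_cl, fit_cl = simulate_and_fit(300, 0.6)  # stands in for a comparison cluster

xbar_c, xbar_cl = X_c.mean(axis=0), X_cl.mean(axis=0)

# Explained part: differences in mean characteristics weighted by China's coefficients.
explained = (xbar_c - xbar_cl) @ fit_c.params
# Unexplained part: coefficient differences weighted by the cluster's mean characteristics.
unexplained = xbar_cl @ (fit_c.params - fit_cl.params)

gap = y_c.mean() - y_cl.mean()
print(gap, explained + unexplained)  # the two parts reproduce the raw gap
```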
---
Results
As Table 1 shows, average levels of job satisfaction range from 5.786 in Austria to 4.342 in Japan, with China, at 4.745, ranking second worst and substantially lower than the sample mean of 5.322. It is interesting to note that two Confucian Asia countries (Japan and China) occupy the two lowest ranks among the 36 countries. Japan's low average job satisfaction is particularly striking, being 0.403 points lower than that of China. The third Confucian Asia region, Taiwan, is also ranked quite low, yet its job satisfaction is a significant 0.430 points higher than that of China. Taiwan also has a higher average job satisfaction than non-Confucian countries such as France, Australia and Poland.
Taking Germany as the reference country, our Fig 1 comparison shows its average job satisfaction to be 0.7 points higher than that of China. When we then run a series of regressions to assess the extent to which socioeconomic and demographic variables, job attributes, and job expectations affect this ranking, we find that the socioeconomic and demographic variables make little difference, but job attributes and job expectations substantially reduce the size of China's coefficient (Fig 2). More specifically, whereas job attributes reduce the coefficient by 33%, adding in job expectations lowers it by 72%, meaning that these two sets of variables explain about two-thirds of the job satisfaction gap between Germany and China. Even after we control for these three variable sets, however, China's coefficient remains a significant -0.19, indicating that cultural differences (probably in answering subjective questions on well-being) may play a certain role in job satisfaction differences among countries.
For a more in-depth explanation of China's markedly low levels of job satisfaction, we decompose the satisfaction differences between China and our three country clusters. The differences in average job satisfaction are presented in Fig 3 and reveal a substantially lower average for China, but relatively small differences among the three clusters. The results of the BO decomposition are presented in Table 2 and show that 30%-46% of the job satisfaction differences between China and the three clusters are associated with differences in socioeconomic and demographic characteristics, job attributes, and job expectations. More specifically, 31% of the gap between China and Cluster 1 is associated with differences in job attributes and job expectations, while 44% of the gap between China and Clusters 2 and 3 is associated with differences in job expectations. Thus, unmet job expectations appear to be a major driver of China's low levels of job satisfaction. We summarize the five variables that account for most of the job satisfaction differences between China and the three clusters in Table 3, and graph the job expectations gap in Fig 4.
What is evident from both graphics is that unmet expectations for an interesting job are by far the most important variable, accounting for 19-34% of the job satisfaction difference (Table 3). In fact, as can be seen in the descriptive statistics in S1 and S2 Figs, although about 82% of the Chinese workers believe that having an interesting job is important, compared to 91%, 95%, and 93% of workers in Clusters 1-3, respectively, only 36% consider their jobs interesting, compared to 65%, 78%, and 74% in Clusters 1-3, respectively. Unmet expectations for income also matter, with about 94% of Chinese workers thinking it important to earn a high income, versus only about 90%, 70%, and 77% of workers in Clusters 1-3, respectively (S1 Fig). Again, however, only about 23% of the Chinese sample agrees that the current position offers a high income, compared with 34% and 30% for Clusters 2 and 3, respectively (S2 Fig). Unmet expectations for income thus appear to have a greater influence on job satisfaction level in China than in more Western economies.
Another aspect that contributes to job satisfaction is the freedom to organize one's own daily work, which only 15% of Chinese workers report having, compared to 27%, 25%, and 26% in Clusters 1-3, respectively (S3 Fig). Even good relationships with colleagues, which 78% of the Chinese workers report having, are significantly less common than the 85%, 90%, and 86% reported by workers in Clusters 1-3, respectively (S3 Fig). Good relations with the employer (at 66%) are also slightly lower in China than in the other countries (74%, 73%, and 72% in Clusters 1-3, respectively).
---
Discussion and conclusions
Given the scarcity of cross-national job satisfaction research that includes China, this present analysis of 2015 ISSP data is most probably the first comprehensive comparison of job satisfaction in China with that in a large sample of other countries. As anticipated by a previous study [27], our results confirm that job satisfaction in China is substantially lower than in most of the other countries studied, ranking second to last of 36. By clustering these countries into three homogeneous groups based on observable economic and sociodemographic characteristics, we are able to identify several reasons for this relatively low job satisfaction, three of which are particularly important.
The most notable driver of low job satisfaction across all comparison clusters is unmet expectations for how interesting a job should be. Although Chinese workers' expectations for this attribute are similar to those of workers in other countries, they consider their jobs substantially less interesting. This finding supports the claim that a large proportion of jobs fail to satisfy worker interests [27]. One possible cause may be the vertical relations (i.e., rigid top-down hierarchy and paternalism) that still dominate Chinese business organizations, which may hamper workers' ability to organize their own daily activities and stifle self-initiative, making the job less interesting. At the same time, however, as S1 Table shows, the share of workers who value the importance of job security (95%) and high income (94%) is larger than the share that values job interest (82%). Thus, Chinese workers, unlike those in our country clusters, value job security and a good income more than having an interesting job, implying that they would rather sacrifice personal interest for a well-paid, guaranteed position. It is therefore not surprising that most young people attending college in China today choose their majors based mainly on future job security and income considerations, and less on intrinsic interest [73].
This importance that Chinese workers place on a well-paying job, which is higher than in most Western countries, generates a second driver of dissatisfaction: the tendency for workers to judge their own current wage as inadequate. In fact, according to CLDS data, the financial aspect has become the most important job characteristic in China [27], an observation that totally contradicts the widely held belief that earnings are less of an intrinsic motivator in Confucian societies. Of course, the per capita annual disposable income of residents in China was approximately 21,966 yuan (equivalent to US$3,527) in 2015, which is indeed lower than in most developed countries [74]. Nonetheless, individuals in all countries tend to assess their own incomes relative to those of their peers, which, given the dramatic increase in income disparity at various levels, could be contributing to the relatively low income satisfaction [75].
A third reason for job dissatisfaction identified by our decomposition analysis is the perception of relatively poor advancement opportunities, which is particularly pronounced in China, even though the amount of importance attributed to it differs little from that in other countries. Yet despite the importance attributed to advancement opportunities, only about 1 in 5 workers reports having a job that actually offers such development perspectives (S2 Fig).
Even though these unmet expectations for an interesting, well-paid job with attractive advancement opportunities can explain part of the job satisfaction gap between China and the other countries, a significant part remains unexplained. One briefly mentioned social aspect that should be emphasized here is that ways of responding to subjective questions on well-being may be culturally specific, making the Chinese workers' low job satisfaction ranking no more than an artefact unassociated with actual job characteristics. Although we cannot refute this argument, which is seemingly supported by the considerable share of the satisfaction gap that our variables cannot explain, the markedly higher levels of job satisfaction reported by Taiwanese workers (a frequent proxy for the Chinese because of a common language and Confucian philosophy) are compelling evidence against it. In fact, the difference in average job satisfaction between Taiwanese and Chinese workers of over 0.4 points is substantial (see Table 1).
Our results do make a useful contribution to the economic convergence or divergence literature [76][77][78], which examines whether, as economies develop, work attitudes converge irrespective of cultural context into a universal stance or whether underlying values and belief systems engender significant differences in employee expectations and attitudes. Our results provide ample evidence for convergence in that Chinese workers attribute similar importance to most job attributes as do workers in other countries (S1 Fig). Interestingly, one of the few notable intercountry differences concerns income, with Chinese workers placing more importance on a well-paying job than their Western counterparts. Nonetheless, even though Chinese workers expect an interesting job, higher pay, and advancement opportunities, this expectation stems less from differing work attitudes or values than from perceptions of what the current job offers. This convergence is further underscored by the importance of developing good relations with coworkers, deemed as important in China as elsewhere despite a lower probability of Chinese workers having a job that allows such development. In fact, relationships with both colleagues and employers in China are not as good as those reported in all three clusters (S3 Fig), a somewhat surprising finding given the group orientation and participative decision-making encouraged by China's collectivistic society.
Finally, cross-national studies such as ours are invaluable, "even indispensable," to valid interpretation and generalizability of findings from research that, like the job satisfaction literature, tends to focus on Western countries and test assumptions specific to a single culture or society. Not only does cross-national investigation ensure that "social structural regularities are not mere particularities, the product of some limited set of historical or cultural or political circumstances," it also forces researchers to "revise [their] interpretations to take account of cross-national differences and inconsistencies that could never be uncovered in single-nation research" (p. 77) [79].
---
The data underlying the results presented in the study are available from http://issp.org/menu-top/home/.
---
Supporting information
Writing -original draft: Xing Zhang, Peng Nie, Alfonso Sousa-Poza.
Writing -review & editing: Alfonso Sousa-Poza. | 35,591 | 808 |
e99ec9468a1867b450504f9f360ee2ecbf67457d | Factors affecting the use of prenatal care by non-western women in industrialized western countries: a systematic review | 2,013 | [
"Review",
"JournalArticle"
] | Background: Despite the potential of prenatal care for addressing many pregnancy complications and concurrent health problems, non-western women in industrialized western countries more often make inadequate use of prenatal care than women from the majority population do. This study aimed to give a systematic review of factors affecting non-western women's use of prenatal care (both medical care and prenatal classes) in industrialized western countries. Methods: Eleven databases (PubMed, Embase, PsycINFO, Cochrane, Sociological Abstracts, Web of Science, Women's Studies International, MIDIRS, CINAHL, Scopus and the NIVEL catalogue) were searched for relevant peer-reviewed articles published between 1995 and July 2012. Qualitative as well as quantitative studies were included. Quality was assessed using the Mixed Methods Appraisal Tool. Factors identified were classified as impeding or facilitating, and categorized according to a conceptual framework, an elaborated version of Andersen's healthcare utilization model. Results: Sixteen articles provided relevant factors that were all categorized. A number of factors (migration, culture, position in host country, social network, expertise of the care provider and personal treatment and communication) were found to include both facilitating and impeding factors for non-western women's utilization of prenatal care. The category demographic, genetic and pregnancy characteristics and the category accessibility of care only included impeding factors. Lack of knowledge of the western healthcare system and poor language proficiency were the most frequently reported impeding factors. Provision of information and care in women's native languages was the most frequently reported facilitating factor. The factors found in this review provide specific indications for identifying non-western women who are at risk of not using prenatal care adequately and for developing interventions and appropriate policy aimed at improving their prenatal care utilization. | Background
Prenatal care has the potential to address many pregnancy complications, concurrent illnesses and health problems [1]. An essential aspect of prenatal care models concerns the content of prenatal care, which is characterized by three main components: a) early and continuing risk assessment, b) health promotion (and facilitating informed choice) and c) medical and psychosocial interventions and follow-up [2,3]. Another essential aspect of prenatal care models concerns the number and timings of prenatal visits. While there is overall agreement on the importance of early initiation of prenatal care, the number of prenatal visits has led to a great deal of discussion. A Cochrane review of ten RCTs among mostly low-risk women concluded that the number of prenatal visits could be reduced without increasing adverse maternal and perinatal outcomes, although women in developed countries might be less satisfied with this reduced number of prenatal visits [4].
Despite universal healthcare insurance coverage in most industrialized western countries, studies in these countries have shown that non-western women make inadequate use of prenatal care. They are less likely to initiate prenatal care in good time [3,[5][6][7], attend all prenatal care appointments [8] and attend prenatal classes [9]. Furthermore, non-western women have also been shown to be at increased risk for adverse perinatal outcomes. A meta-analysis by Gagnon et al. showed that Asian, North African and sub-Saharan African migrants were at greater risk of feto-infant mortality than 'majority' populations in western industrialized countries, with adjusted odds ratios of 1.29, 1.25 and 2.43 respectively. This study also found that Asian and sub-Saharan African migrants are at greater risk of preterm birth, with adjusted odds ratios of 1.14 and 1.29 respectively [10]. Besides an increased risk for adverse perinatal outcomes, non-western women are also at increased risk of adverse maternal outcomes, in terms of both mortality [11,12] and morbidity [13].
A few studies have implied a relationship between non-western women's higher risk of adverse pregnancy outcomes and their use of prenatal care. In a Dutch study conducted by Alderliesten et al., late start of prenatal care was one of the maternal substandard care factors for perinatal mortality that were more common among Surinamese and Moroccan women [14]. In a French study conducted by Philibert et al., the excess risk for postpartum maternal mortality among non-western women was associated with a poorer quality of care, suggesting attention should be paid to early enrolment in prenatal care [15]. This relationship emphasizes the importance of proper use of prenatal care to address pregnancy complications, concurrent illnesses and health problems.
Two previously conducted reviews provide relevant insights into the factors affecting prenatal care utilization [16,17]. The first review focused on women, irrespective of origin, in high-income countries. Ethnicity, demographic factors, socioeconomic factors at the individual and neighbourhood level, health behaviour and provider characteristics were found to be determinants of inadequate prenatal care utilization [16]. The second review focused on first-generation migrant women of western and non-western origin in western industrialized countries. In this review, being younger than 20, poor or fair language proficiency and socioeconomic factors were reported to affect prenatal care utilization [17].
A review specifically focused on factors affecting prenatal care utilization by non-western women, irrespective of generation, was still lacking. Furthermore, qualitative studies - which are well suited to exploring the experiences and perceptions that play a role in women's prenatal care utilization - were not included in previously conducted reviews. Also, these reviews were not restricted to countries with similar accessibility to healthcare, which complicates generalization of the results found. In this review, we therefore aimed to identify and summarize all reported factors, irrespective of study design, affecting non-western women's use of prenatal care and prenatal classes in industrialized western countries with universal insurance coverage. Prenatal (or antenatal) care was defined as all care given by professionals to monitor women's pregnancy. All courses preparing pregnant women for birth or teaching them how to feed and take care of their baby were defined as prenatal or antenatal classes. 'Factors' were defined as all experiences, needs, expectations, circumstances, characteristics and health beliefs of non-western women.
---
Methods
---
Search strategy
The following databases were searched: PubMed, Embase, PsycINFO, Cochrane, Sociological Abstracts, Web of Science, Women's Studies International, MIDIRS, CINAHL, Scopus and the NIVEL catalogue. The search was limited to articles published between 1995 and July 2012.
The search strategy consisted of a number of Medical Subject Headings (MeSH) terms and text words, aiming to include as many relevant papers as possible (Additional file 1). It was devised for use in PubMed and was adapted for use in the other databases. The search was performed in all fields of PubMed (the main database) and in titles, abstracts and keywords for the other databases. No language restriction was applied.
---
Methods of screening and selection criteria
The initial screening of articles was based on titles, and the second based on titles and abstracts. Finally, the full texts of the articles were assessed for inclusion. Screening was done by five reviewers (WD, AF, TW, JM, AB). Each article was screened by two reviewers: one of the first four reviewers plus the fifth reviewer. For each article, any discrepancy between the two reviewers was resolved through discussion.
The aim was to identify studies analysing or exploring factors affecting the use of prenatal care by non-western women in industrialized western countries. We therefore included studies if they (a) concerned prenatal care; (b) concerned factors affecting the use of prenatal care; (c) did not concern specific diseases during prenatal care, with the exception of pregnancy-related or postpartum conditions; (d) concerned industrialized western countries (high-income OECD countries except for Japan and Korea) with universal insurance coverage (resulting in exclusion of the USA); (e) concerned non-western women as clients (women from Turkey, Africa, Latin-America, Asia), with results presented at subgroup level; (f) did not concern illegal immigrants, refugees, asylum seekers, students or migrant farm workers (seasonal workers, internal migration); (g) were based on primary research (qualitative, quantitative, mixed methods or case studies).
We have used the term 'non-western' women to mean immigrant women from the countries mentioned above, as well as their (immediate) descendants. Studies focusing on women from non-migrant ethnic minority groups (e.g. Aboriginals) were excluded.
In the first two screening stages (titles and titles plus abstracts), studies were included when both reviewers agreed they were eligible for inclusion, or if there was doubt about whether or not to exclude them. In the final screening stage (full texts), studies were included when both reviewers felt they met all the inclusion criteria.
---
Data extraction and quality appraisal
The following information was abstracted from the included studies:
(a) general information: authors, journal, publication date, country, language; (b) research design: qualitative, quantitative or mixed-methods design; (c) research population: ethnic group, immigrant generation, sampling method, sample size; (d) analytical approach; (e) all possible factors affecting the use of prenatal care; (f) results and conclusions.
The quality of the studies was assessed by two reviewers, using the Mixed Methods Appraisal Tool (MMAT-version 2011) [18]. This quality appraisal tool seems appropriate as it was designed to appraise complex literature reviews consisting of qualitative, quantitative and mixed-methods studies. Quantitative and qualitative studies are each appraised by four criteria with overall scores varying from 0% (no criterion met) to 100% (all four criteria met). For criteria partially met, we decided to give half of the criterion score. For mixed methods studies, three components are appraised: the qualitative component, the quantitative component and the mixed methods component. The overall score is determined by the lowest component score.
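To make the scoring rule concrete, the sketch below (Python, illustrative only, not the tooling used by the review) encodes the MMAT-2011 logic described above: each component is rated on four criteria, partially met criteria receive half credit, and for mixed-methods studies the overall score is the lowest component score.

```python
# Illustrative encoding of the MMAT-2011 scoring rule described in the text.
# Each component has four criteria rated 1.0 (met), 0.5 (partially met) or 0.0 (not met).

def component_score(ratings):
    """Return the component score as a percentage of four criteria."""
    assert len(ratings) == 4
    return 100 * sum(ratings) / len(ratings)

def mmat_score(components):
    """components: dict mapping component name -> list of four criterion ratings.
    For a purely qualitative or quantitative study, pass a single component;
    for a mixed-methods study, pass the qualitative, quantitative and mixed components.
    The overall score is the lowest component score."""
    scores = {name: component_score(r) for name, r in components.items()}
    return min(scores.values()), scores

# Example: a mixed-methods study whose weakest component drives the overall score.
overall, per_component = mmat_score({
    "qualitative":   [1, 1, 0.5, 1],    # 87.5%
    "quantitative":  [1, 0.5, 0.5, 0],  # 50%
    "mixed_methods": [0.5, 0, 0, 0.5],  # 25%  -> overall score 25%
})
print(overall, per_component)
```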
---
Synthesis
Because of the heterogeneity in terms of countries, non-western groups and methods of analysis, we chose not to conduct a meta-analysis of the quantitative results. Instead, we produced a narrative synthesis of the results of the included studies. For that synthesis, we used the conceptual framework of Foets et al. (2007), an elaborated version of Andersen's healthcare utilization model (Figure 1) [19]. As this conceptual framework integrates the possible explanations for the relationship between ethnicity and healthcare use, it seemed the most appropriate. In this elaborated model, the predisposing, enabling and need factors of Andersen are explained by two groups of underlying factors: individual factors and health service factors. The individual factors are subdivided into several categories: demographics and genetics, migration, culture, position in the host country and social network. The health service factors are subdivided into: accessibility, expertise, personal treatment and communication, and professionally defined need. To fit the factors emerging from the data extraction, the category "demographics and genetics" was expanded to include pregnancy. This finally resulted in the following categories:
Individual factors
1) Demographics, genetics and pregnancy: women's age, parity, planning and acceptance of pregnancy, pregnancy-related health behaviour and perceived health during pregnancy
2) Migration: women's knowledge of/familiarity with the prenatal care services/system, experiences and expectations with prenatal care use in their country of origin, pregnancy status on arrival in the new industrialized western country
3) Culture: women's cultural practices, values and norms, acculturation, religious beliefs and views, language proficiency, beliefs about pregnancy and prenatal care
4) Position in the host country: women's education level, women's pregnancy-related knowledge, household arrangement, financial resources and income
5) Social network: size of and degree of contact with the social network, information and support from the social network
Health service factors
6) Accessibility: transport, opening hours, booking appointments, direct and indirect discrimination by the prenatal care providers
7) Expertise: prenatal care tailored to patients' needs and preferences
8) Treatment and communication: communication from prenatal care providers to women, personal treatment of women by prenatal care providers, availability of health promotion/information material, use of alternative means of communication
9) Professionally defined need: referral by general practitioners and other healthcare providers to prenatal care providers
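As a purely illustrative aid (not part of the review's methods), the nine categories above can be encoded as a small two-level structure used to tag extracted factors; the example factors are taken from the review text, while the function and field names are hypothetical.

```python
# Illustrative two-level encoding of the elaborated Foets/Andersen framework.
FRAMEWORK = {
    "individual": [
        "demographics_genetics_pregnancy",
        "migration",
        "culture",
        "position_in_host_country",
        "social_network",
    ],
    "health_service": [
        "accessibility",
        "expertise",
        "treatment_and_communication",
        "professionally_defined_need",
    ],
}

def classify(factor: str, category: str) -> dict:
    """Attach the framework level (individual vs health service) to an extracted factor."""
    level = next(lvl for lvl, cats in FRAMEWORK.items() if category in cats)
    return {"factor": factor, "category": category, "level": level}

print(classify("poor language proficiency", "culture"))
print(classify("transport and mobility problems", "accessibility"))
```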
---
Results
A total of 11954 articles were initially identified, of which 4488 were duplicates. Title screening of the remaining 7466 non-duplicate references resulted in 1844 relevant articles being selected for abstract screening. After abstract screening, 333 articles were selected for full text screening, either because they were relevant (230) or no abstract was available (103). Finally, full text assessment resulted in 16 peer-reviewed articles being included and their methodological quality being assessed (Figure 2).
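The screening flow can be checked with a few lines of arithmetic; the snippet below (illustrative only) simply verifies that the stage counts reported above are internally consistent.

```python
# Consistency check of the reported screening flow (numbers taken from the text).
identified = 11954
duplicates = 4488
title_screened = identified - duplicates      # non-duplicate references screened on title
selected_for_abstract = 1844
selected_for_full_text = 230 + 103            # relevant abstracts + no abstract available
included = 16

assert title_screened == 7466
assert selected_for_full_text == 333
print(f"{identified} identified -> {title_screened} title-screened -> "
      f"{selected_for_abstract} abstract-screened -> {selected_for_full_text} full-text -> {included} included")
```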
---
Characteristics of the included studies
Additional file 2 provides an overview of the articles included. Three articles described quantitative observational studies: two cohort studies [20,21] and one cross-sectional study [22], with methodological quality scores varying between 75% and 100%. Twelve articles described qualitative studies: seven individual interview studies [23-29], two focus group studies [30,31], two studies combining individual interviews and focus group interviews [32,33] and one study combining individual interviews and observations [34]. The methodological quality scores of eleven of these twelve qualitative studies varied between 50% and 100%, with the twelfth study scoring 25%.
One study used mixed methods, combining a retrospective cohort design with focus groups [35]. Only the focus group component yielded relevant information for this review. The methodological quality score of this study was 25%.
The studies were conducted in various industrialized western countries. Nine studies were conducted in a European country [20,21,23,28,29,31-33,35], four in Canada [22,25,27,30] and three in Australia [24,26,34].
Fourteen articles were published in English [20-22,24-34], one in German [23] and one in Italian [35].
The studies included women from different regions of the world. Three studies reported factors for sub-Saharan African women: Somali or Ghanaian [29,32,33]; eight for Asian women: South Asian [22], Sri Lankan [23], Filipino [26], Vietnamese [27], Indian [30], Thai [34] or a mixture of Asian origins [24,28]; and two for Turkish women [21,31]. One study reported factors for Muslim women not further specified [25].
Some studies reported factors for various non-western ethnic groups. One study reported factors for sub-Saharan African women (Ghanaian), North African women (Moroccan), Turkish women and other non-western women not further specified [20]. Another study reported factors for North African women (Northwest African) and Asian women (Chinese) as part of a group of migrant women [35].
---
Barriers to prenatal care utilization
All factors impeding the use of prenatal care were classified as barriers. The first column of Table 1 gives an overview of these factors according to the conceptual framework of Foets et al. Both quantitative and qualitative studies reported factors impeding non-western women's use of prenatal care. Demographic, genetic and pregnancy-related factors were described in only one quantitative study and in none of the qualitative studies. In this study, multiparity, being younger than 20 and unplanned pregnancy were associated with late prenatal care entry [20].
On the other hand, expertise factors as well as personal treatment and communication factors were only described in qualitative studies. Care providers with a lack of knowledge of cultural practices were described as being unable to provide knowledgeable health guidance and more likely to display insensitive behaviour [25]. Interviews with caregivers revealed that Somali women perceiving themselves as being treated badly by a care provider would not return for antenatal care [33]. Poor communication complicated women's access to prenatal care [35], prevented attendance of prenatal classes [23] and was reported as an underlying problem in understanding maternity reproductive services [32].
Factors reported in both qualitative and quantitative studies concerned migration, culture, position in the host country, social network and accessibility of prenatal care.
- Migration-related factors: For Asian, Somali and Turkish women, as well as Muslim women otherwise unspecified, lack of knowledge of or information about the Western healthcare system was reported to deter utilization of prenatal care [26,27,30-32,35] or prenatal classes [22,23,25].
Arriving in the new country late in pregnancy was reported as another reason for not attending prenatal classes [22].
- Cultural factors: Adherence to cultural and religious practices was reported to impede prenatal care utilization by Asian and Muslim women.

Table 1. Overview of the factors according to the conceptual framework of Foets et al.
Individual factors

Demographics, genetics and pregnancy
  Barriers: Being younger than 20 [20]*; Multiparity [20]*; Unplanned pregnancy [20]*

Migration
  Barriers: Lack of knowledge of or information about the Western healthcare system [22,23,25-27,30-32,35]; Arriving in the new country late in pregnancy [22]*
  Facilitators: Recognition of prenatal care as an important issue in the community [30] •

Culture
  Barriers: Adherence to cultural and religious practices [23,25,34] •; Poor language proficiency [20,22,24,26,27,30,31]; Lack of assertiveness [24] •; Dependency on husband [22,34,35]; Perceiving pregnancy as a normal state [29] •; Belief that prenatal care is more a burden than a benefit [25] •; Belief that prenatal classes are not necessary [22,34]
  Facilitators: Care provider of the same ethnic origin [27] •; Belief that prenatal care ensures baby's well-being [23,34] •; Belief in looking after your own health for a healthy baby [34] •

Position in host country
  Barriers: Financial problems [22,23,31]; Unemployment [21]*; Low or intermediate educational level [20,21]*; Social inequality (education, economic resources and residence (rural or urban)) [35] •; Lack of time [22,23,27,30]; Lack of childcare [23,25] •; No medical leave from work [31] •
  Facilitators: Better socio-economic follow-up [31] •

Social network
  Barriers: No support from family [35] •; Acquiring or following advice from family and friends [22,23]; Isolated community [35] •
  Facilitators: Husband with a good command of the industrialized country's official language [34] •

Health service factors

Accessibility
  Barriers: Inappropriate timing and incompatible opening hours [23,35] •; Transport and mobility problems [22,26,27,35]; Indirect discrimination [32] •

Expertise
  Barriers: Care provider lacking knowledge of cultural practices [25] •
  Facilitators: A mature, experienced healthcare provider with a command of the native language [30] •; Care provider showing interest and respect [23] •; Care provider alleviating worries and fears [23] •

Personal treatment and communication
  Barriers: Poor communication [23,32,35]
Women entered prenatal care late because of shame about being undressed during consultations [23]. Prenatal classes were not attended because of feelings of fear and embarrassment about watching a video of the act of giving birth [34] and because classes were not exclusively designed for women [25]. Poor language proficiency was another cultural characteristic described as an impeding factor for prenatal care [20,22,24,27,30,31] and prenatal classes [26]. Lack of assertiveness appeared to make it difficult for Asian women to access maternity services and information; these women were too reluctant or ashamed to enquire about services or ask for information [24]. Dependency on the husband was described as complicating access to both prenatal care [35] and prenatal classes [22,34]. Pregnancy was perceived as a normal state by Somali women, and some of them therefore did not understand the necessity of prenatal care [29]. Prenatal care was perceived as more of a burden than a benefit because the same procedure is performed every time and doctors are too busy to provide pregnancy-related information [25]. Prenatal classes were perceived as not being necessary because women had already experienced birth [22,34] or had attended classes previously [22].
- Factors related to women's position in the host country: Financial problems impeded the ability to pay for health insurance [31], access to medical care during pregnancy [22] and attendance of prenatal classes [23]. Unemployment was another characteristic that was identified: in a Dutch study, enabling factors (including being in employment) explained Turkish women's delayed entry into prenatal care [21]. In two studies, a low or intermediate educational level was associated with late entry into prenatal care [20,21]. Social inequalities in education, economic resources and residence (rural or urban) among those who have immigrated were found to affect access to prenatal care [35]. Lack of time was reported as a reason for not attending prenatal classes [22,23,30] and as a barrier to accessing prenatal support from public health and community nurses [27]. Another reason for not attending prenatal classes was lack of childcare [23,25]. Turkish women in a Swiss study reported problems obtaining medical leave from work [31].
- Social network factors: Little or no support from family was described as complicating access to prenatal care [35]. Acquiring or following advice from family and friends was reported as a reason for not attending prenatal classes [22,23]. Isolation of the community was described as complicating Chinese women's access to prenatal care [35].
- Accessibility factors: Inappropriate timing was reported as a reason for not attending prenatal classes [23], while incompatible opening hours (incompatible with women's own working hours or those of their husband or accompanying persons) were reported to affect their access to prenatal care [35]. Transport and mobility problems were reported to complicate access to medical care during pregnancy [22], prenatal care [35] and prenatal classes [26,27]. Indirect discrimination also affected access to care: Somali women in a UK study reported that general practitioners would sometimes refuse to see them if they did not bring along an interpreter, and that they had to book appointments for secondary care three days in advance if interpretation services were needed [32].
---
Facilitators of prenatal care utilization
All factors facilitating the use of prenatal care were classified as facilitators. The second column of Table 1 gives an overview of these factors according to the conceptual framework of Foets et al. These factors were only reported in qualitative studies and concerned migration, culture, position in the host country, social network, expertise, and personal treatment and communication.
- Migration-related factors: To improve prenatal class attendance, women suggested recognition of prenatal care as an important issue in the community through mobilisation within their communities by word of mouth, radio and television [30].
- Cultural factors: Women felt that prenatal support provided by health workers or peers of the same ethnic origin would be beneficial to them [27].
Believing that prenatal care ensures babies' wellbeing was another characteristic that facilitated prenatal care utilization. In one study, prenatal care was perceived as an important aspect of pregnancy that could assure women about their babies' wellbeing [34], while in another study regular consultations reduced women's uncertainty or fear about the pregnancy or their babies' health [23].
Believing in looking after your own health for a healthy baby was also described as a reason for not missing any prenatal check-ups [34].
- Factors related to women's position in the host country: Women suggested better socioeconomic follow-up by institutions because socioeconomic conditions affected their ability to pay for health insurance [31].
- Social network factors: Women with a husband who spoke the industrialized country's official language reported that their husbands told them to attend antenatal check-ups and arranged antenatal care because the women did not speak the country's language themselves [34].
- Expertise factors: Women recommended that healthcare providers facilitating prenatal care sessions should be mature women with experience of childbirth [30]. Care providers were expected to show respect by being interested and allowing for women's sense of shame about nudity [23]. They were also expected to alleviate worries and fears by giving women a sense of security through careful monitoring, assessment and supervision, and by acknowledging women's fears and reassuring them [23].
- Personal treatment and communication factors: One of these factors was the use of women's native language. Women proposed more information in their native language [31], prenatal classes conducted in their native language [27] and healthcare providers with a command of their native language [30]. Group prenatal care was described as being more accessible when practice midwives spoke several community languages [28]. Another characteristic was improved communication. Care providers or institutions were expected to provide translation [23,31] and conversation space [23], and to compensate for women's limited experience and knowledge by asking specific questions and giving customized information, demonstrations and explanations [23].
In one study, women reported a preference for audio-visual material over written information [27]. Women explained that the term "classes" suggests that they are ignorant about childbirth, and that prenatal classes should be called prenatal sessions to improve their attendance [30].
---
Discussion
---
Factors affecting prenatal care utilization
This review gives an overview of factors affecting non-western women's use of prenatal care in western societies. 'Factors' were therefore described in the broadest sense, comprising experiences, needs and expectations, circumstances, characteristics and health beliefs of non-western women. The results indicate that non-western women's use of prenatal care is influenced by a variety of factors, and that several factors may exert their effect simultaneously. The categories migration, culture, position in the host country, social network, expertise of the care provider and personal treatment and communication were found to include both facilitating and impeding factors for non-western women's prenatal care utilization. The category demographics, genetics and pregnancy and the category accessibility of care included only impeding factors. The only aspect of the conceptual framework of Foets et al. that was not found in the studies included in this review was 'professionally defined need'.
In a systematic review conducted by Feijen-de Jong et al., ethnic minority status was found to be one of the determinants of inadequate prenatal care utilization in high-income countries [16]. As ethnic minority status does not of itself explain prenatal care utilization, our review adds relevant information to the review by Feijen-de Jong and colleagues, and gives more insight into the factors behind these women's prenatal care utilization, at least for those of non-western origin. The demographic and socioeconomic factors found in our review are largely in line with the results of Feijen-de Jong et al. However, we did not find any factors concerning pattern or type of prenatal care, planned place of birth, prior birth outcomes and health behaviour. Our results are also in line with the review by Heaman et al., who reported that demographic, socioeconomic and language factors affected prenatal care utilization by first-generation migrant women [17]. In addition to these two reviews, we found several other factors at the individual and health service levels that impeded or facilitated non-western women's prenatal care utilization.
To our knowledge, this is the first review of prenatal care utilization by non-western women that has combined quantitative, qualitative and mixed-methods studies. By doing so, we were able to find a very wide range of factors affecting non-western women's prenatal care utilization. This is clearly evident from the barriers. A comparison shows that the demographic, genetic and pregnancy-related characteristics of inadequate users were contributed entirely by the quantitative studies: all three factors in this category (being younger than 20, multiparity and unplanned pregnancy) were derived from one quantitative study. The qualitative studies contributed fully to the expertise factors as well as the personal treatment and communication factors: care providers lacking knowledge of cultural practices, poor communication and perceiving yourself as having been badly treated by a care provider were derived only from qualitative studies and the qualitative part of the mixed-methods study. Besides providing all the barriers in a specific category, quantitative and qualitative studies also complemented each other by both providing barriers in the same category (migration, culture, position in the host country, social network, accessibility), sometimes even by means of the same barrier. The factors lack of knowledge of or information about the Western healthcare system, poor language proficiency, dependency on husband, belief that prenatal care is not necessary, financial problems, lack of time, acquiring or following advice from family and friends, and transport and mobility problems were all reported in quantitative as well as qualitative studies.
By combining different study designs, we were also able to provide more in-depth insight into the mechanisms of some factors. For instance, we obtained more insight into the mechanisms of the factor multiparity reported in two previous quantitative studies. Qualitative studies showed that multiparous women did not perceive prenatal classes as necessary because they had already given birth. Furthermore, multiparous women reported lack of childcare as a reason for not attending prenatal classes. Perhaps these two reasons also play a role in multiparous women's utilization of medical care during pregnancy.
In the introduction, non-western women's risk for adverse pregnancy outcomes was described according to region of origin. By placing this review's findings in a regional perspective, some noteworthy insights were gained about factors affecting these high-risk groups' healthcare utilization. As to individual barriers, lack of knowledge of the Western healthcare system was described among all four regional groups distinguished in this review (sub-Saharan African, North African, Asian and Turkish). Health beliefs were reported among sub-Saharan African (Somali) and Asian women. Dependency on husband was reported among Asian and North African women. However, adherence to cultural practices, acquiring or following advice from family and friends, lack of assertiveness and lack of time were only described in studies conducted among Asian women. As to health service barriers, accessibility factors were reported in studies conducted among Asian and North African women. On the other hand, expertise and personal treatment factors were only found among sub-Saharan African (Somali) women.
These insights can be used to develop a more targeted approach towards specific groups, for example by placing emphasis on 'dependency on husband' for Asian and North African women, and on 'personal treatment' for sub-Saharan African women. However, this should be done carefully: some factors may seem to play no role for certain ethnic groups simply because they were not included or discussed in the studies concerned.
The individual and health service facilitators were all derived from qualitative studies conducted among Asian women and Turkish women. Nevertheless, these facilitating factors can be applicable to other ethnic groups, as they relate to difficulties also reported by these groups (e.g. improved communication).
Several factors, such as lack of knowledge of or information about the Western healthcare system, poor language proficiency and poor communication, applied to women of various ethnic origins. On the other hand, some factors were highly specific to a country, culture or religion. Muslim women, for example, were found to refuse combined sessions with males, while other women might have fewer gender-related concerns. Extrapolation of such results to other groups is therefore less appropriate. The factors reported to facilitate prenatal care utilization were mostly suggestions made by women. As women based these suggestions on their own experiences with prenatal care, we decided to include them in our review.
In a systematic review conducted by Simkhada et al., perceiving pregnancy as a normal state and seeing little direct benefit from antenatal care were reported as barriers to antenatal care utilization in developing countries [36]. In our review, we found somewhat similar impeding beliefs about prenatal care in two studies conducted among first-generation women. Furthermore, Simkhada and colleagues reported unsupportive family and friends as a barrier to antenatal care utilization, which was also found in our review. These similarities between non-western women in industrialized western countries and women in developing countries indicate that some women seem to retain certain beliefs, attitudes and needs they had prior to migration. A comparison between first- and second-generation non-western women would be very useful, but was not possible: only one study included second-generation women, and it presented the results in combination with those for first-generation women.
Even though we included only high-income countries with universally accessible healthcare, we found that financial factors did affect non-western women's prenatal care utilization. One explanation for this finding might be that women may not be aware of the universal accessibility of care, and therefore perceive lack of money as a barrier to prenatal care. It might also be that, even though women are currently legally resident (which was an inclusion criterion of our review), they reflect back on periods when this was not the case.
---
Methodological reflections
One noteworthy point is the large number of qualitative studies included in this review, as compared to quantitative studies. During the review process, we identified several quantitative studies focusing on factors affecting prenatal care utilization by non-western women among their study population. Regrettably, we had to exclude most of these studies as they lacked a sub-analysis specifically for non-western women. By doing a sub-analysis specifically for non-western women in future quantitative studies on prenatal care utilization, more insights can be gained on factors affecting their use of prenatal care.
The studies included in this review all considered different subgroups of non-western women. However, the immigrant generation of the women was not reported in five studies and factors were not specified according to generation in the only study that included first and second generation women.
The factors found in the qualitative studies were mostly part of women's experiences, needs and expectations regarding prenatal care. These studies did not specifically focus on inadequate users, and therefore did not include a definition of inadequate use. In contrast, two of the three quantitative studies did define inadequate use, but did so differently (Additional file 3). This difference in definitions between the quantitative studies, and the lack of a definition in the qualitative studies, complicates comparison and integration of the study results.
The included studies showed a large variance in methodological quality. Nevertheless, we decided not to exclude studies with a low quality score, in order to prevent loss of any relevant factors in this review. Instead we compared the results of the high and low methodological quality studies against each other, and did not find any contradictory results.
Two main strengths of this study are the use of a broad search string and the absence of a language restriction, which minimize the chance of missing relevant studies. The inclusion of quantitative, qualitative and mixed-methods studies also adds to its strength, as this increases the chance of finding different types of relevant factors affecting prenatal care utilization. Another strength is the restriction to countries with universally accessible healthcare, which makes the results more comparable and generalizable to other countries with a similar organization of their healthcare system. The use of a theoretical framework to sort the factors found is a further strength of the study, as it gives a clear overview of the factors and the level at which they exert their effect.
---
Conclusions
Sixteen studies heterogeneous in methodological quality were included in this review. A variety of factors at the individual and health service levels were found to affect non-western women's use of prenatal care. Lack of knowledge of the western healthcare system and poor language proficiency were the most frequently reported impeding factors, while provision of information and care in women's native language was the most frequently reported facilitating factor. The factors found could all be classified according to the conceptual framework of Foets et al., and covered all categories with the exception of 'professionally defined need'.
The factors reported were mainly derived from qualitative studies, and more detailed quantitative research with sub-analyses for non-western women is needed to determine the magnitude of these factors' effects on prenatal care utilization. Furthermore, more qualitative studies specifically aimed at non-western women making inadequate use of prenatal care are necessary.
The factors found in this review provide specific indications for identifying non-western women at risk of inadequate use of prenatal care, and developing interventions and adequate policy aiming at improving their prenatal care utilization.
---
Additional files
Additional file 1: Search strategy in PubMed.
Additional file 2: Overview of the study characteristics.
Additional file 3: Additional information of the included studies.
---
Competing interests
The authors declare that they have no competing interests.
---
Authors' contributions
All authors have made substantial contributions to this study. AB and WD developed the review with the support of TW, JM and AF. AB conducted the search, and all authors contributed to the screening, data extraction and quality assessment. The final version of the manuscript was read and approved by all authors. | 37,827 | 2,015 |
d13962a9d4e2bdfc1d0b6c05a133066f6b69dabf | The Effect of Social Media Use and Environment on Mental Health Among Young People in Sukabumi | 2,023 | [
"JournalArticle",
"Review"
] | This study looked into how social media use and outside influences affected young people's mental health in Sukabumi, Indonesia. 400 young individuals between the ages of 18 and 25 participated in a cross-sectional survey in which information on social media use, environmental exposure, and mental health outcomes (such as depression, anxiety, and stress) was gathered. According to the findings, increased social media use was linked to greater levels of stress, anxiety, and depression, but exposure to environmental elements including noise, air pollution, and green open spaces was found to be a significant predictor of mental health outcomes. In particular, increased exposure to green space was linked to lower levels of sadness, anxiety, and stress whereas higher exposure to air and noise pollution was linked to higher levels of these emotions. Gender was also found to be a significant predictor, with women reporting higher levels of depression, anxiety, and stress than men. These findings highlight the importance of considering the role of social media use, environmental factors, and gender in understanding mental health outcomes among young people in Sukabumi. Interventions aimed at promoting mental health among young people should consider social media use, environmental factors, and gender-related factors. Limitations of the study include a cross-sectional design and limited generalizations to other populations. | INTRODUCTION
Mental health problems are prevalent among young people, and preventive approaches have gained traction to improve their mental health [1]-[3]. Adolescents are particularly vulnerable to mental health difficulties, and there are barriers to support, including capacity difficulties, stigma, and a lack of tailored services [4]. Research shows weaknesses in young people's knowledge and beliefs about mental health and mental health support, as well as a historical accumulation of stigmatizing attitudes; research on young people's desire for support is also lacking [5]. Preventive psychiatry is a potentially transformative strategy to reduce the incidence of mental disorders in young people [6], [7]. Selective approaches mostly target familial vulnerability and exposure to non-genetic risks. Selective screening and psychological/psychoeducational interventions in vulnerable subgroups may improve anxiety/depression symptoms, but their effectiveness in reducing the incidence of psychotic/bipolar/general mental disorders has not been proven [8]. Psychoeducational interventions can universally improve anxiety symptoms but do not prevent depression/anxiety disorders, while universal physical exercise can reduce the incidence of anxiety disorders [4].
The COVID-19 pandemic has highlighted the link between education and health, and school closures are most likely associated with significant health disruptions for children and adolescents [9], [10]. A systematic review [11] of the available evidence was conducted to inform policy decisions regarding school closures and reopenings during the pandemic; it found that mental health was significantly impacted by school closures, with 27 studies identifying a considerable impact. A growing number of digital health treatments have been created to address a variety of mental health disorders, and digital health technologies are seen as promising for treating mental health problems among adolescents and young people. A systematic review [12] of recent evidence on digital health interventions aimed at adolescents and young people with mental health conditions found that they were effective in addressing these conditions in comparison to usual care or inactive controls. However, the quality of evidence is generally low, and there is a lack of evidence on the cost-effectiveness and generalizability of interventions to low-resource settings.
According to a systematic review [12], an estimated 1 in 5 adolescents experience a mental health disorder each year. The most common mental health problems studied in young people are depression and difficulties related to mood, anxiety, and social/behavioral problems [4]. The same review [12] found that digital health technologies are considered promising for addressing mental health among adolescents and young people, and that a growing number of digital health interventions target this population. Preventive approaches have gained traction for improving mental health in young people, and there is evidence supporting primary prevention of psychotic, bipolar, and general mental disorders, as well as the promotion of good mental health, as a potentially transformative strategy to reduce the incidence of these disorders in young people [4].
In one review [13], the most common mental health problems investigated in adolescents with physical disabilities beginning in childhood were depression and difficulties related to mood, anxiety, and social/behavioral problems. Adolescents believe that mental health concerns are a typical occurrence, and the rise in these issues is linked to pressures regarding academic success, social media, and greater candor about mental health issues [14]. Prejudices, preconceptions, hearsay, and gender norms are all regarded as significant risk factors for mental health issues, and prejudice towards persons with mental health issues is thought to stem from ignorance [15], [16].
In young Australians, harmful alcohol use is linked to mental health issues and other risky behaviors [17]. Indonesia is also experiencing a rise in mental health issues among young people, brought on by expectations connected to academic success, social media, and greater candor about mental health issues [10], [18], [19]. Prejudice, stereotyping, and gender norms are all key risk factors for mental health issues [14]. Having a physical disability during adolescence and young adulthood increases the risk of developing mental illness [13]. Selective approaches mostly target familial vulnerability and non-genetic risk exposure, while universal psychoeducational interventions can improve anxiety symptoms but do not prevent depression/anxiety disorders [20]. Approaches that target school climate or the social determinants of mental disorders have the greatest potential to reduce the risk profile of the population as a whole [4]. Social media can offer a space for people to share stories of times when they are experiencing difficulties and to seek support for mental health issues [21]. However, the use of social media can cause young people to experience conditions such as anxiety, stress, and depression [22]. The detrimental effects of social media use on young people's mental health can be caused by a variety of factors, including increased screen time, cyberbullying, and social comparison [23]. The impact of COVID-19 on young people's mental health has also been discussed in relation to social media use [9]. Mental health practitioners have recommended that the use of digital technology and social media be explored routinely during mental health clinical consultations with young people [12]. It is important to identify barriers to effective communication and examples of good practice in talking about young people's web-based activities related to their mental health during clinical consultations.
Several studies have demonstrated that using social media can have a detrimental effect on mental health, including depression, anxiety, and suicidal ideation, particularly for those who spend more than 2 hours per day on social networking sites [24], [25]. Bullying on social media can also fuel the development of depression and sadness [26]. The use of social media, however, has also been linked to positive effects on young people's mental health, including social support and a decrease in feelings of loneliness [25]. Research into the connection between highly visual social media and young people's mental health is ongoing, but the results are conflicting and few studies focus specifically on highly visual social media [21], [27]. Schools, parents, social media and advertising companies, and governments have a responsibility to protect children and adolescents from harm and to educate them on how to use social media safely and responsibly [28].
Parents can educate their children on how to use social media safely and responsibly, including setting boundaries such as limiting access to technology in bedrooms and at mealtimes [29]. Parents can also be good role models by not allowing excessive use of social media themselves and by modeling positive habits [30]. In addition, parents can help their children choose reputable sources of support that can be accessed through social media, such as groups for parents and caregivers of children with cancer [29], [31], [32]. Schools, social media and advertising companies, and governments also have a responsibility to protect children and adolescents from harm and to educate them on how to use social media safely and responsibly [33]. The events of 2020/2021, such as the ongoing climate emergency, the bushfires in Australia and the COVID-19 pandemic, reflect the human-caused environmental issues that young people are most concerned about, and they also exacerbate the mental health issues that were already reported to be at crisis point in 2019 [34]. One study found that environmental factors, such as perception of the surrounding environment, can significantly predict mental health indicators in young people aged 15 to 17 years [35]. Another study found that self-esteem mediates the impact of epilepsy-specific factors and environmental factors on mental health outcomes in young people with epilepsy [36]. It is very important to listen to adolescents' views on mental health issues because these problems are common among young people, and exposure to stigmatization is an additional burden that leads to increased suffering [14], [17]. Social media is a huge force in young people's lives with far-reaching effects on their development, yet little research has been done on the impact of social media on young people's mental illness [24]. The relationship between highly visual social media and young people's mental health remains unclear, and there are still few data exclusively examining highly visual social media [25]. Social media use can negatively impact mental health and lead to addiction, but it can also help people stay connected with friends and family, as during the COVID-19 pandemic [9], [10]. The impact of COVID-19 on young people's mental health has been a concern, and young people's discussions on social media about the impact of COVID-19 on their mental health have been analyzed thematically [37].
Research on how social media affects young people's mental health is, however, lacking. Social media users have experienced both good and bad effects as a result of the COVID-19 epidemic [28], [37]. Although parents, social media, and advertising companies also have a duty to protect children and adolescents from harm, schools play a significant role in teaching young people how to use social media safely and responsibly [11]. It is important to understand the psychological effects of COVID-19 on young people and how these effects fit into the pre-existing social environment. Therefore, there is a pressing need for more research on how social media use and the surrounding environment affect young people's mental health in Sukabumi. This will help us understand the potential risks and benefits of social media use and help us create the right kind of support for young people's mental health.
---
LITERATURE REVIEW
---
Social Media Use and Mental Health
A number of studies have reported a correlation between social media use and poor mental health among young people. Research by [21], [30] found that participants who used social media platforms such as Instagram and Facebook for a week reported decreased subjective well-being and increased feelings of loneliness and isolation. Similar findings were made by Woods and Scott (2016), who discovered that heavy social media use was linked to more severe anxiety and depressive symptoms.
The link between social media usage and mental health consequences is complicated, though, and not all studies have shown adverse correlations. For instance, one study [16], [22] found that adolescents' use of social media was not linked to depressive symptoms. In addition, several studies have found that social media use can have a positive effect on mental health outcomes, such as increased social support and self-esteem [3].
---
Environmental Factors and Mental Health
Environmental factors have also been found to play an important role in mental health outcomes among young people. For example, a study by [38] found that exposure to green space was associated with lower stress levels and better mental health outcomes. Similarly, studies by [39], [40] found that exposure to the natural environment was associated with increased attention capacity and reduced ADHD symptoms among children.
Conversely, exposure to negative environmental factors, such as air pollution and noise pollution, has been found to have a negative impact on mental health outcomes. Studies [39], [41] found that exposure to air pollution was associated with an increased risk of depression and anxiety symptoms among adolescents.
---
METHODS
This study used a cross-sectional design to examine the relationship between social media use, environmental factors, and mental health outcomes among young people in Sukabumi. A cross-sectional study is a type of observational research design that collects data at a single point in time; it is useful for investigating the prevalence of a particular phenomenon and for examining relationships between variables [42]. The participants were 400 young people aged between 18 and 24 years living in Sukabumi, recruited through convenience sampling from local universities and community organizations. The inclusion criteria for participating in the study were:
1. Between 18 and 24 years old
2. Residing in Sukabumi
3. Regularly using social media platforms
4. Willing to participate in this research
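As an illustration only (the authors' screening procedure is not published), the criteria above can be expressed as a simple eligibility check; the field names used here are hypothetical.

```python
# Illustrative eligibility check applying the four inclusion criteria above.
def is_eligible(respondent: dict) -> bool:
    return (
        18 <= respondent["age"] <= 24
        and respondent["city"] == "Sukabumi"
        and respondent["uses_social_media_regularly"]
        and respondent["consented"]
    )

sample = {"age": 21, "city": "Sukabumi", "uses_social_media_regularly": True, "consented": True}
print(is_eligible(sample))  # True
```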
---
RESULTS AND DISCUSSION
---
Sample Characteristics
A total of 400 young people between the ages of 18 and 24 participated in the study. The mean age of the sample was 21.2 years (SD = 1.5), and 60% of the sample identified as female. The majority of the sample (78%) were college students, and 68% reported living in urban areas.
Participants reported using social media platforms for an average of 3.6 hours per day (SD = 1.8). The most used platform was Instagram (78%), followed by Facebook (67%) and WhatsApp (52%).
Participants reported moderate levels of environmental exposure to air pollution (M=3.4, SD=0.8) and noise pollution (M=3.3, SD=0.9), and low levels of exposure to green space (M=2.1, SD=0.6).
Participants reported moderate levels of depression (M = 12.6, SD = 6.7), anxiety (M = 10.8, SD = 6.2), and stress (M = 13.1, SD = 7.1) over the past week.
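For illustration, the kind of summary statistic reported above (a mean and standard deviation for an exposure rating) can be computed as follows; the ratings in the example are made up, since the raw data are not published.

```python
# Illustrative mean/SD calculation for an exposure rating; the values are hypothetical.
from statistics import mean, stdev

green_space_ratings = [2, 1, 3, 2, 2, 3, 1, 2, 2, 3]  # hypothetical subset of responses
print(f"M = {mean(green_space_ratings):.1f}, SD = {stdev(green_space_ratings):.2f}")
```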
---
Multiple Regression Analysis
To test the association between social media use, environmental factors, and mental health outcomes, multiple regression analyses were performed. Age and gender were included as control variables in the analysis.
The overall regression model was statistically significant (F(5, 194) = 23.87, p < .001), indicating that the predictors explained a large amount of variance in mental health outcomes.
The regression results showed that social media use, air pollution, noise pollution, and green open space were all significant predictors of mental health outcomes. In particular, more frequent use of social media was linked to higher levels of depression, anxiety, and stress, and higher exposure to air and noise pollution was likewise linked to higher levels of depression, anxiety, and stress. In contrast, greater exposure to green space was linked to lower levels of depression, anxiety, and stress. Gender was also a significant predictor, with women reporting higher levels of depression, anxiety, and stress than men. Age did not significantly predict mental health outcomes.
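A sketch of the kind of model described above is given below; it is illustrative only, the variable names are hypothetical, and the data are simulated simply to make the example runnable rather than drawn from the study.

```python
# Illustrative multiple regression: a mental health score regressed on social media use,
# the three environmental exposures, and age and gender as controls.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400  # sample size reported in the study
df = pd.DataFrame({
    "social_media_hours": rng.normal(3.6, 1.8, n),
    "air_pollution": rng.normal(3.4, 0.8, n),
    "noise_pollution": rng.normal(3.3, 0.9, n),
    "green_space": rng.normal(2.1, 0.6, n),
    "age": rng.integers(18, 25, n),
    "gender": rng.choice(["female", "male"], n, p=[0.6, 0.4]),
})
# Simulated depression score loosely following the direction of effects reported above.
df["depression"] = (12.6 + 0.8 * df["social_media_hours"] + 1.0 * df["air_pollution"]
                    - 1.5 * df["green_space"] + rng.normal(0, 5, n))

model = smf.ols(
    "depression ~ social_media_hours + air_pollution + noise_pollution"
    " + green_space + age + C(gender)",
    data=df,
).fit()
print(model.summary())  # overall F-test, R-squared, and per-predictor coefficients
```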
---
Discussion
According to the study's findings, social media use, environmental factors, and gender are significant predictors of young people's mental health in Sukabumi. In particular, more frequent use of social media was linked to higher levels of depression, anxiety, and stress. These results are in line with earlier studies that found social media use to be a risk factor for poor mental health outcomes among young people [10], [16], [21], [22], [26], [30].
The findings also revealed that environmental variables, including noise pollution, air pollution, and green open space, were strong predictors of mental health outcomes. More specifically, greater exposure to green space was linked to lower levels of depression, anxiety, and stress, whereas higher exposure to air and noise pollution was linked to higher levels of these symptoms. These results are in line with other studies that identified environmental variables as significant predictors of mental health outcomes [38], [41], [43].
Gender was also found to be an important predictor of mental health outcomes, with women reporting greater levels of depression, anxiety, and stress than men. These results are in line with other research that found gender differences in mental health outcomes [2]. Overall, these findings emphasize the importance of taking social media use, environmental factors, and gender into account when analyzing mental health outcomes among young people in Sukabumi. The results suggest that interventions aimed at promoting mental health among young people should consider addressing social media use, environmental factors, and gender-related factors.
---
CONCLUSION
In conclusion, this study provides evidence that social media use, environmental factors, and gender are important predictors of mental health among young people in Sukabumi. These findings suggest that interventions aimed at promoting mental health among young people should consider addressing social media use, environmental factors, and gender-related factors. Future studies using longitudinal designs may provide more definitive evidence of causal links between these factors and mental health outcomes. | 17,180 | 1,438 |